2023-05-31 10:52:40,435 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2
2023-05-31 10:52:40,447 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.regionserver.wal.TestLogRolling timeout: 13 mins
2023-05-31 10:52:40,477 INFO [Time-limited test] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=10, OpenFileDescriptor=264, MaxFileDescriptor=60000, SystemLoadAverage=297, ProcessCount=168, AvailableMemoryMB=10205
2023-05-31 10:52:40,485 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-05-31 10:52:40,485 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2/cluster_17952b38-5a7b-70a9-5139-5c68d8ddebfd, deleteOnExit=true
2023-05-31 10:52:40,486 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-05-31 10:52:40,487 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2/test.cache.data in system properties and HBase conf
2023-05-31 10:52:40,487 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2/hadoop.tmp.dir in system properties and HBase conf
2023-05-31 10:52:40,488 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2/hadoop.log.dir in system properties and HBase conf
2023-05-31 10:52:40,488 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2/mapreduce.cluster.local.dir in system properties and HBase conf
2023-05-31 10:52:40,489 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-05-31 10:52:40,489 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-05-31 10:52:40,601 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-05-31 10:52:40,985 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-05-31 10:52:40,989 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-05-31 10:52:40,989 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-05-31 10:52:40,989 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-05-31 10:52:40,990 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-05-31 10:52:40,990 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-05-31 10:52:40,991 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-05-31 10:52:40,991 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-05-31 10:52:40,991 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2/dfs.journalnode.edits.dir in system properties and HBase conf
2023-05-31 10:52:40,992 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-05-31 10:52:40,992 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2/nfs.dump.dir in system properties and HBase conf
2023-05-31 10:52:40,993 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2/java.io.tmpdir in system properties and HBase conf
2023-05-31 10:52:40,993 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2/dfs.journalnode.edits.dir in system properties and HBase conf
2023-05-31 10:52:40,993 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf
2023-05-31 10:52:40,994 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2/fs.s3a.committer.staging.tmp.path in system properties and HBase conf
Formatting using clusterid: testClusterID
2023-05-31 10:52:41,504 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000.
2023-05-31 10:52:41,519 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-05-31 10:52:41,522 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-05-31 10:52:41,782 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2023-05-31 10:52:41,946 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2023-05-31 10:52:41,967 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-31 10:52:42,006 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26
2023-05-31 10:52:42,038 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2/java.io.tmpdir/Jetty_localhost_localdomain_42593_hdfs____.3zyfdh/webapp
2023-05-31 10:52:42,204 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:42593
2023-05-31 10:52:42,214 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000.
2023-05-31 10:52:42,216 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-05-31 10:52:42,217 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-05-31 10:52:42,605 WARN [Listener at localhost.localdomain/40463] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 10:52:42,664 WARN [Listener at localhost.localdomain/40463] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-05-31 10:52:42,681 WARN [Listener at localhost.localdomain/40463] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-31 10:52:42,686 INFO [Listener at localhost.localdomain/40463] log.Slf4jLog(67): jetty-6.1.26
2023-05-31 10:52:42,691 INFO [Listener at localhost.localdomain/40463] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2/java.io.tmpdir/Jetty_localhost_36385_datanode____.58299o/webapp
2023-05-31 10:52:42,770 INFO [Listener at localhost.localdomain/40463] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36385
2023-05-31 10:52:43,051 WARN [Listener at localhost.localdomain/43505] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 10:52:43,060 WARN [Listener at localhost.localdomain/43505] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-05-31 10:52:43,064 WARN [Listener at localhost.localdomain/43505] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-31 10:52:43,067 INFO [Listener at localhost.localdomain/43505] log.Slf4jLog(67): jetty-6.1.26
2023-05-31 10:52:43,073 INFO [Listener at localhost.localdomain/43505] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2/java.io.tmpdir/Jetty_localhost_44381_datanode____.hkskkt/webapp
2023-05-31 10:52:43,161 INFO [Listener at localhost.localdomain/43505] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44381
2023-05-31 10:52:43,170 WARN [Listener at localhost.localdomain/44683] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 10:52:43,433 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa2e469e3d980ba14: Processing first storage report for DS-33de876a-af59-44c8-9808-3c75c1ec9b23 from datanode d893772c-8efc-4b90-81ee-7c89c8ad679b
2023-05-31 10:52:43,434 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa2e469e3d980ba14: from storage DS-33de876a-af59-44c8-9808-3c75c1ec9b23 node DatanodeRegistration(127.0.0.1:39527, datanodeUuid=d893772c-8efc-4b90-81ee-7c89c8ad679b, infoPort=44663, infoSecurePort=0, ipcPort=44683, storageInfo=lv=-57;cid=testClusterID;nsid=1369998552;c=1685530361587), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0
2023-05-31 10:52:43,434 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6f0a98e1a33f9f89: Processing first storage report for DS-ede34f34-04f1-475a-a899-7e8b57f1f57a from datanode 82a51e5b-7f62-4103-9d7d-c3443b595fa0
2023-05-31 10:52:43,435 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6f0a98e1a33f9f89: from storage DS-ede34f34-04f1-475a-a899-7e8b57f1f57a node DatanodeRegistration(127.0.0.1:40023, datanodeUuid=82a51e5b-7f62-4103-9d7d-c3443b595fa0, infoPort=36909, infoSecurePort=0, ipcPort=43505, storageInfo=lv=-57;cid=testClusterID;nsid=1369998552;c=1685530361587), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 10:52:43,435 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa2e469e3d980ba14: Processing first storage report for DS-6b159c54-6b1e-41b8-913e-e8075132911f from datanode d893772c-8efc-4b90-81ee-7c89c8ad679b
2023-05-31 10:52:43,435 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa2e469e3d980ba14: from storage DS-6b159c54-6b1e-41b8-913e-e8075132911f node DatanodeRegistration(127.0.0.1:39527, datanodeUuid=d893772c-8efc-4b90-81ee-7c89c8ad679b, infoPort=44663, infoSecurePort=0, ipcPort=44683, storageInfo=lv=-57;cid=testClusterID;nsid=1369998552;c=1685530361587), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 10:52:43,435 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6f0a98e1a33f9f89: Processing first storage report for DS-ce0a2e64-0cf9-4e3d-aa00-83ffa637459f from datanode 82a51e5b-7f62-4103-9d7d-c3443b595fa0
2023-05-31 10:52:43,435 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6f0a98e1a33f9f89: from storage DS-ce0a2e64-0cf9-4e3d-aa00-83ffa637459f node DatanodeRegistration(127.0.0.1:40023, datanodeUuid=82a51e5b-7f62-4103-9d7d-c3443b595fa0, infoPort=36909, infoSecurePort=0, ipcPort=43505, storageInfo=lv=-57;cid=testClusterID;nsid=1369998552;c=1685530361587), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 10:52:43,507 DEBUG [Listener at localhost.localdomain/44683] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2
2023-05-31 10:52:43,562 INFO [Listener at localhost.localdomain/44683] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2/cluster_17952b38-5a7b-70a9-5139-5c68d8ddebfd/zookeeper_0, clientPort=58368, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2/cluster_17952b38-5a7b-70a9-5139-5c68d8ddebfd/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2/cluster_17952b38-5a7b-70a9-5139-5c68d8ddebfd/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-05-31 10:52:43,573 INFO [Listener at localhost.localdomain/44683] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=58368
2023-05-31 10:52:43,580 INFO [Listener at localhost.localdomain/44683] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 10:52:43,582 INFO [Listener at localhost.localdomain/44683] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 10:52:44,200 INFO [Listener at localhost.localdomain/44683] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2 with version=8
2023-05-31 10:52:44,200 INFO [Listener at localhost.localdomain/44683] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/hbase-staging
2023-05-31 10:52:44,439 INFO [Listener at localhost.localdomain/44683] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2023-05-31 10:52:44,807 INFO [Listener at localhost.localdomain/44683] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45
2023-05-31 10:52:44,831 INFO [Listener at localhost.localdomain/44683] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-05-31 10:52:44,832 INFO [Listener at localhost.localdomain/44683] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-05-31 10:52:44,832 INFO [Listener at localhost.localdomain/44683] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-05-31 10:52:44,832 INFO [Listener at localhost.localdomain/44683] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-05-31 10:52:44,832 INFO [Listener at localhost.localdomain/44683] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-05-31 10:52:44,945 INFO [Listener at localhost.localdomain/44683] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-05-31 10:52:45,018 DEBUG [Listener at localhost.localdomain/44683] util.ClassSize(228): Using Unsafe to estimate memory layout
2023-05-31 10:52:45,095 INFO [Listener at localhost.localdomain/44683] ipc.NettyRpcServer(120): Bind to /148.251.75.209:39993
2023-05-31 10:52:45,104 INFO [Listener at localhost.localdomain/44683] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 10:52:45,107 INFO [Listener at localhost.localdomain/44683] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 10:52:45,126 INFO [Listener at localhost.localdomain/44683] zookeeper.RecoverableZooKeeper(93): Process identifier=master:39993 connecting to ZooKeeper ensemble=127.0.0.1:58368
2023-05-31 10:52:45,156 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): master:399930x0, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-05-31 10:52:45,158 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:39993-0x101a1265ec50000 connected
2023-05-31 10:52:45,179 DEBUG [Listener at localhost.localdomain/44683] zookeeper.ZKUtil(164): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-05-31 10:52:45,180 DEBUG [Listener at localhost.localdomain/44683] zookeeper.ZKUtil(164): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-05-31 10:52:45,184 DEBUG [Listener at localhost.localdomain/44683] zookeeper.ZKUtil(164): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-05-31 10:52:45,192 DEBUG [Listener at localhost.localdomain/44683] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39993
2023-05-31 10:52:45,193 DEBUG [Listener at localhost.localdomain/44683] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39993
2023-05-31 10:52:45,193 DEBUG [Listener at localhost.localdomain/44683] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39993
2023-05-31 10:52:45,194 DEBUG [Listener at localhost.localdomain/44683] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39993
2023-05-31 10:52:45,194 DEBUG [Listener at localhost.localdomain/44683] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39993
2023-05-31 10:52:45,199 INFO [Listener at localhost.localdomain/44683] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2, hbase.cluster.distributed=false
2023-05-31 10:52:45,254 INFO [Listener at localhost.localdomain/44683] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45
2023-05-31 10:52:45,254 INFO [Listener at localhost.localdomain/44683] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-05-31 10:52:45,254 INFO [Listener at localhost.localdomain/44683] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-05-31 10:52:45,255 INFO [Listener at localhost.localdomain/44683] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-05-31 10:52:45,255 INFO [Listener at localhost.localdomain/44683] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-05-31 10:52:45,255 INFO [Listener at localhost.localdomain/44683] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-05-31 10:52:45,259 INFO [Listener at localhost.localdomain/44683] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-05-31 10:52:45,262 INFO [Listener at localhost.localdomain/44683] ipc.NettyRpcServer(120): Bind to /148.251.75.209:41383
2023-05-31 10:52:45,263 INFO [Listener at localhost.localdomain/44683] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-05-31 10:52:45,268 DEBUG [Listener at localhost.localdomain/44683] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-05-31 10:52:45,270 INFO [Listener at localhost.localdomain/44683] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 10:52:45,271 INFO [Listener at localhost.localdomain/44683] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 10:52:45,272 INFO [Listener at localhost.localdomain/44683] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41383 connecting to ZooKeeper ensemble=127.0.0.1:58368
2023-05-31 10:52:45,277 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): regionserver:413830x0, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-05-31 10:52:45,278 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41383-0x101a1265ec50001 connected
2023-05-31 10:52:45,278 DEBUG [Listener at localhost.localdomain/44683] zookeeper.ZKUtil(164): regionserver:41383-0x101a1265ec50001, quorum=127.0.0.1:58368, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-05-31 10:52:45,280 DEBUG [Listener at localhost.localdomain/44683] zookeeper.ZKUtil(164): regionserver:41383-0x101a1265ec50001, quorum=127.0.0.1:58368, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-05-31 10:52:45,280 DEBUG [Listener at localhost.localdomain/44683] zookeeper.ZKUtil(164): regionserver:41383-0x101a1265ec50001, quorum=127.0.0.1:58368, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-05-31 10:52:45,281 DEBUG [Listener at localhost.localdomain/44683] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41383
2023-05-31 10:52:45,281 DEBUG [Listener at localhost.localdomain/44683] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41383
2023-05-31 10:52:45,281 DEBUG [Listener at localhost.localdomain/44683] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41383
2023-05-31 10:52:45,282 DEBUG [Listener at localhost.localdomain/44683] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41383
2023-05-31 10:52:45,282 DEBUG [Listener at localhost.localdomain/44683] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41383
2023-05-31 10:52:45,284 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,39993,1685530364309
2023-05-31 10:52:45,293 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-05-31 10:52:45,294 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,39993,1685530364309
2023-05-31 10:52:45,312 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-05-31 10:52:45,312 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): regionserver:41383-0x101a1265ec50001, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-05-31 10:52:45,312 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 10:52:45,314 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-05-31 10:52:45,315 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,39993,1685530364309 from backup master directory
2023-05-31 10:52:45,315 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-05-31 10:52:45,317 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,39993,1685530364309
2023-05-31 10:52:45,317 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-05-31 10:52:45,318 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-05-31 10:52:45,318 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,39993,1685530364309
2023-05-31 10:52:45,320 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0
2023-05-31 10:52:45,321 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0
2023-05-31 10:52:45,406 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/hbase.id with ID: ca876aec-860c-4b44-9087-d6585658750d
2023-05-31 10:52:45,458 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 10:52:45,474 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 10:52:45,516 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x0418c617 to 127.0.0.1:58368 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-05-31 10:52:45,544 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@72745baf, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-05-31 10:52:45,563 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-05-31 10:52:45,565 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000
2023-05-31 10:52:45,572 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-05-31 10:52:45,598 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/MasterData/data/master/store-tmp
2023-05-31 10:52:45,627 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 10:52:45,627 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-05-31 10:52:45,627 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-31 10:52:45,627 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-31 10:52:45,627 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-05-31 10:52:45,627 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-31 10:52:45,628 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-31 10:52:45,628 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 10:52:45,629 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/MasterData/WALs/jenkins-hbase20.apache.org,39993,1685530364309 2023-05-31 10:52:45,648 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C39993%2C1685530364309, suffix=, logDir=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/MasterData/WALs/jenkins-hbase20.apache.org,39993,1685530364309, archiveDir=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/MasterData/oldWALs, maxLogs=10 2023-05-31 10:52:45,665 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.CommonFSUtils$DfsBuilderUtility(753): Could not find replicate method on builder; will not set replicate when creating output stream
java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DistributedFileSystem$HdfsDataOutputStreamBuilder.replicate()
	at java.lang.Class.getMethod(Class.java:1786)
	at org.apache.hadoop.hbase.util.CommonFSUtils$DfsBuilderUtility.(CommonFSUtils.java:750)
	at org.apache.hadoop.hbase.util.CommonFSUtils.createForWal(CommonFSUtils.java:802)
	at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.initOutput(ProtobufLogWriter.java:102)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:160)
	at org.apache.hadoop.hbase.wal.FSHLogProvider.createWriter(FSHLogProvider.java:78)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:307)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:70)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:881)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:574)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.init(AbstractFSWAL.java:515)
	at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:160)
	at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:62)
	at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:295)
	at org.apache.hadoop.hbase.master.region.MasterRegion.createWAL(MasterRegion.java:200)
	at org.apache.hadoop.hbase.master.region.MasterRegion.bootstrap(MasterRegion.java:220)
	at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:348)
	at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
	at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855)
	at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193)
	at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 10:52:45,686 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/MasterData/WALs/jenkins-hbase20.apache.org,39993,1685530364309/jenkins-hbase20.apache.org%2C39993%2C1685530364309.1685530365664 2023-05-31 10:52:45,687 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39527,DS-33de876a-af59-44c8-9808-3c75c1ec9b23,DISK], DatanodeInfoWithStorage[127.0.0.1:40023,DS-ede34f34-04f1-475a-a899-7e8b57f1f57a,DISK]] 2023-05-31 10:52:45,687 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 
'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-31 10:52:45,688 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 10:52:45,690 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:52:45,691 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:52:45,750 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:52:45,757 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-31 10:52:45,780 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window 
factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-31 10:52:45,792 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 10:52:45,797 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:52:45,798 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:52:45,810 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:52:45,814 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 10:52:45,815 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=847409, jitterRate=0.07753664255142212}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 10:52:45,815 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] 
regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 10:52:45,816 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-31 10:52:45,832 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-31 10:52:45,832 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-31 10:52:45,834 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-31 10:52:45,836 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-05-31 10:52:45,865 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 28 msec 2023-05-31 10:52:45,865 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-31 10:52:45,888 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-31 10:52:45,894 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
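[Editor's note] The benign `Could not find replicate method on builder` DEBUG entry with its `NoSuchMethodException` trace earlier in this log comes from a reflection-based capability probe: CommonFSUtils looks up the optional `replicate()` builder method once and falls back when the running Hadoop version lacks it. A minimal, hypothetical sketch of that pattern (class and method names here are illustrative, not HBase's actual code):

```java
import java.lang.reflect.Method;

public class ReplicateProbe {
    // Returns the no-arg Method if the class exposes it, else null.
    // Mirrors the probe pattern: getMethod() throws NoSuchMethodException
    // when absent, and the caller degrades gracefully instead of failing.
    static Method probe(Class<?> clazz, String name) {
        try {
            return clazz.getMethod(name);
        } catch (NoSuchMethodException e) {
            // Optional capability not present; a DEBUG line like the one
            // above would be logged here and the feature skipped.
            return null;
        }
    }

    public static void main(String[] args) {
        // String has length() but no replicate(), exercising both paths.
        System.out.println(probe(String.class, "length") != null);
        System.out.println(probe(String.class, "replicate") != null);
    }
}
```

Probing once (typically in a static initializer) and caching the result avoids paying the reflection cost on every output-stream creation.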
2023-05-31 10:52:45,919 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-31 10:52:45,922 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-05-31 10:52:45,924 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-31 10:52:45,928 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-31 10:52:45,932 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-31 10:52:45,935 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:52:45,936 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-31 10:52:45,937 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): 
master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-31 10:52:45,947 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-31 10:52:45,950 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-31 10:52:45,950 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): regionserver:41383-0x101a1265ec50001, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-31 10:52:45,951 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:52:45,951 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,39993,1685530364309, sessionid=0x101a1265ec50000, setting cluster-up flag (Was=false) 2023-05-31 10:52:45,967 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:52:45,971 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-31 10:52:45,973 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] 
procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,39993,1685530364309 2023-05-31 10:52:45,992 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:52:46,017 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-31 10:52:46,018 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,39993,1685530364309 2023-05-31 10:52:46,020 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/.hbase-snapshot/.tmp 2023-05-31 10:52:46,086 INFO [RS:0;jenkins-hbase20:41383] regionserver.HRegionServer(951): ClusterId : ca876aec-860c-4b44-9087-d6585658750d 2023-05-31 10:52:46,090 DEBUG [RS:0;jenkins-hbase20:41383] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-31 10:52:46,094 DEBUG [RS:0;jenkins-hbase20:41383] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-31 10:52:46,094 DEBUG [RS:0;jenkins-hbase20:41383] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-31 10:52:46,097 DEBUG [RS:0;jenkins-hbase20:41383] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-31 10:52:46,097 DEBUG [RS:0;jenkins-hbase20:41383] zookeeper.ReadOnlyZKClient(139): Connect 0x0fc3f1db to 127.0.0.1:58368 with session timeout=90000ms, retries 30, retry interval 
1000ms, keepAlive=60000ms 2023-05-31 10:52:46,110 DEBUG [RS:0;jenkins-hbase20:41383] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1041e53, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 10:52:46,111 DEBUG [RS:0;jenkins-hbase20:41383] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@53b9aab, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-31 10:52:46,116 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-31 10:52:46,126 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-31 10:52:46,126 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-31 10:52:46,126 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-31 10:52:46,126 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-31 10:52:46,126 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, 
maxPoolSize=10 2023-05-31 10:52:46,126 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:52:46,127 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-31 10:52:46,127 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:52:46,131 DEBUG [RS:0;jenkins-hbase20:41383] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:41383 2023-05-31 10:52:46,132 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685530396132 2023-05-31 10:52:46,134 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-31 10:52:46,136 INFO [RS:0;jenkins-hbase20:41383] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-31 10:52:46,136 INFO [RS:0;jenkins-hbase20:41383] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-31 10:52:46,137 DEBUG [RS:0;jenkins-hbase20:41383] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-31 10:52:46,138 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-31 10:52:46,138 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-31 10:52:46,139 INFO [RS:0;jenkins-hbase20:41383] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,39993,1685530364309 with isa=jenkins-hbase20.apache.org/148.251.75.209:41383, startcode=1685530365253 2023-05-31 10:52:46,143 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-31 10:52:46,145 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-31 10:52:46,154 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-31 10:52:46,155 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-31 10:52:46,155 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-31 10:52:46,155 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-31 10:52:46,156 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-31 10:52:46,156 DEBUG [RS:0;jenkins-hbase20:41383] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-31 10:52:46,159 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-31 10:52:46,161 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-31 10:52:46,161 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-31 10:52:46,172 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-31 10:52:46,173 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-31 10:52:46,178 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large 
file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685530366177,5,FailOnTimeoutGroup] 2023-05-31 10:52:46,182 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685530366180,5,FailOnTimeoutGroup] 2023-05-31 10:52:46,182 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-31 10:52:46,182 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-31 10:52:46,185 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-31 10:52:46,185 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-05-31 10:52:46,189 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-31 10:52:46,190 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-31 10:52:46,190 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2 2023-05-31 10:52:46,211 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 
10:52:46,214 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740
2023-05-31 10:52:46,217 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/meta/1588230740/info
2023-05-31 10:52:46,218 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info
2023-05-31 10:52:46,220 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 10:52:46,220 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740
2023-05-31 10:52:46,224 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/meta/1588230740/rep_barrier
2023-05-31 10:52:46,225 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier
2023-05-31 10:52:46,226 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 10:52:46,227 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740
2023-05-31 10:52:46,229 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/meta/1588230740/table
2023-05-31 10:52:46,230 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table
2023-05-31 10:52:46,232 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 10:52:46,234 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/meta/1588230740
2023-05-31 10:52:46,236 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/meta/1588230740
2023-05-31 10:52:46,240 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead.
2023-05-31 10:52:46,242 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740
2023-05-31 10:52:46,246 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-05-31 10:52:46,248 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=786458, jitterRate=3.4049153327941895E-5}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216}
2023-05-31 10:52:46,248 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740:
2023-05-31 10:52:46,248 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes
2023-05-31 10:52:46,248 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740
2023-05-31 10:52:46,248 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740
2023-05-31 10:52:46,248 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms
2023-05-31 10:52:46,248 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740
2023-05-31 10:52:46,249 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740
2023-05-31 10:52:46,250 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-05-31 10:52:46,254 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta
2023-05-31 10:52:46,254 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta
2023-05-31 10:52:46,262 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}]
2023-05-31 10:52:46,274 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN
2023-05-31 10:52:46,277 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false
2023-05-31 10:52:46,293 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:41909, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService
2023-05-31 10:52:46,304 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39993] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,41383,1685530365253
2023-05-31 10:52:46,319 DEBUG [RS:0;jenkins-hbase20:41383] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2
2023-05-31 10:52:46,319 DEBUG [RS:0;jenkins-hbase20:41383] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:40463
2023-05-31 10:52:46,319 DEBUG [RS:0;jenkins-hbase20:41383] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1
2023-05-31 10:52:46,323 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-05-31 10:52:46,324 DEBUG [RS:0;jenkins-hbase20:41383] zookeeper.ZKUtil(162): regionserver:41383-0x101a1265ec50001, quorum=127.0.0.1:58368, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,41383,1685530365253
2023-05-31 10:52:46,325 WARN [RS:0;jenkins-hbase20:41383] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-05-31 10:52:46,325 INFO [RS:0;jenkins-hbase20:41383] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-05-31 10:52:46,326 DEBUG [RS:0;jenkins-hbase20:41383] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/WALs/jenkins-hbase20.apache.org,41383,1685530365253
2023-05-31 10:52:46,327 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,41383,1685530365253]
2023-05-31 10:52:46,336 DEBUG [RS:0;jenkins-hbase20:41383] zookeeper.ZKUtil(162): regionserver:41383-0x101a1265ec50001, quorum=127.0.0.1:58368, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,41383,1685530365253
2023-05-31 10:52:46,344 DEBUG [RS:0;jenkins-hbase20:41383] regionserver.Replication(139): Replication stats-in-log period=300 seconds
2023-05-31 10:52:46,351 INFO [RS:0;jenkins-hbase20:41383] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds
2023-05-31 10:52:46,367 INFO [RS:0;jenkins-hbase20:41383] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false
2023-05-31 10:52:46,370 INFO [RS:0;jenkins-hbase20:41383] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms
2023-05-31 10:52:46,370 INFO [RS:0;jenkins-hbase20:41383] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled.
2023-05-31 10:52:46,371 INFO [RS:0;jenkins-hbase20:41383] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S
2023-05-31 10:52:46,377 INFO [RS:0;jenkins-hbase20:41383] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled.
2023-05-31 10:52:46,377 DEBUG [RS:0;jenkins-hbase20:41383] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:52:46,377 DEBUG [RS:0;jenkins-hbase20:41383] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:52:46,377 DEBUG [RS:0;jenkins-hbase20:41383] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:52:46,377 DEBUG [RS:0;jenkins-hbase20:41383] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:52:46,378 DEBUG [RS:0;jenkins-hbase20:41383] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:52:46,378 DEBUG [RS:0;jenkins-hbase20:41383] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2
2023-05-31 10:52:46,378 DEBUG [RS:0;jenkins-hbase20:41383] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:52:46,378 DEBUG [RS:0;jenkins-hbase20:41383] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:52:46,378 DEBUG [RS:0;jenkins-hbase20:41383] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:52:46,378 DEBUG [RS:0;jenkins-hbase20:41383] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:52:46,379 INFO [RS:0;jenkins-hbase20:41383] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled.
2023-05-31 10:52:46,379 INFO [RS:0;jenkins-hbase20:41383] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled.
2023-05-31 10:52:46,379 INFO [RS:0;jenkins-hbase20:41383] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled.
2023-05-31 10:52:46,392 INFO [RS:0;jenkins-hbase20:41383] regionserver.HeapMemoryManager(209): Starting, tuneOn=false
2023-05-31 10:52:46,393 INFO [RS:0;jenkins-hbase20:41383] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,41383,1685530365253-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled.
2023-05-31 10:52:46,406 INFO [RS:0;jenkins-hbase20:41383] regionserver.Replication(203): jenkins-hbase20.apache.org,41383,1685530365253 started
2023-05-31 10:52:46,406 INFO [RS:0;jenkins-hbase20:41383] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,41383,1685530365253, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:41383, sessionid=0x101a1265ec50001
2023-05-31 10:52:46,406 DEBUG [RS:0;jenkins-hbase20:41383] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting
2023-05-31 10:52:46,406 DEBUG [RS:0;jenkins-hbase20:41383] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,41383,1685530365253
2023-05-31 10:52:46,406 DEBUG [RS:0;jenkins-hbase20:41383] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,41383,1685530365253'
2023-05-31 10:52:46,406 DEBUG [RS:0;jenkins-hbase20:41383] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-05-31 10:52:46,407 DEBUG [RS:0;jenkins-hbase20:41383] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-05-31 10:52:46,408 DEBUG [RS:0;jenkins-hbase20:41383] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started
2023-05-31 10:52:46,408 DEBUG [RS:0;jenkins-hbase20:41383] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting
2023-05-31 10:52:46,408 DEBUG [RS:0;jenkins-hbase20:41383] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,41383,1685530365253
2023-05-31 10:52:46,408 DEBUG [RS:0;jenkins-hbase20:41383] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,41383,1685530365253'
2023-05-31 10:52:46,408 DEBUG [RS:0;jenkins-hbase20:41383] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort'
2023-05-31 10:52:46,408 DEBUG [RS:0;jenkins-hbase20:41383] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired'
2023-05-31 10:52:46,409 DEBUG [RS:0;jenkins-hbase20:41383] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started
2023-05-31 10:52:46,409 INFO [RS:0;jenkins-hbase20:41383] quotas.RegionServerRpcQuotaManager(63): Quota support disabled
2023-05-31 10:52:46,409 INFO [RS:0;jenkins-hbase20:41383] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager.
2023-05-31 10:52:46,429 DEBUG [jenkins-hbase20:39993] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1
2023-05-31 10:52:46,432 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,41383,1685530365253, state=OPENING
2023-05-31 10:52:46,440 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it
2023-05-31 10:52:46,441 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 10:52:46,442 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED
2023-05-31 10:52:46,446 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,41383,1685530365253}]
2023-05-31 10:52:46,525 INFO [RS:0;jenkins-hbase20:41383] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C41383%2C1685530365253, suffix=, logDir=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/WALs/jenkins-hbase20.apache.org,41383,1685530365253, archiveDir=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/oldWALs, maxLogs=32
2023-05-31 10:52:46,540 INFO [RS:0;jenkins-hbase20:41383] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/WALs/jenkins-hbase20.apache.org,41383,1685530365253/jenkins-hbase20.apache.org%2C41383%2C1685530365253.1685530366529
2023-05-31 10:52:46,540 DEBUG [RS:0;jenkins-hbase20:41383] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40023,DS-ede34f34-04f1-475a-a899-7e8b57f1f57a,DISK], DatanodeInfoWithStorage[127.0.0.1:39527,DS-33de876a-af59-44c8-9808-3c75c1ec9b23,DISK]]
2023-05-31 10:52:46,631 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,41383,1685530365253
2023-05-31 10:52:46,633 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false
2023-05-31 10:52:46,637 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:43358, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService
2023-05-31 10:52:46,649 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740
2023-05-31 10:52:46,650 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-05-31 10:52:46,653 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C41383%2C1685530365253.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/WALs/jenkins-hbase20.apache.org,41383,1685530365253, archiveDir=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/oldWALs, maxLogs=32
2023-05-31 10:52:46,670 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/WALs/jenkins-hbase20.apache.org,41383,1685530365253/jenkins-hbase20.apache.org%2C41383%2C1685530365253.meta.1685530366655.meta
2023-05-31 10:52:46,670 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40023,DS-ede34f34-04f1-475a-a899-7e8b57f1f57a,DISK], DatanodeInfoWithStorage[127.0.0.1:39527,DS-33de876a-af59-44c8-9808-3c75c1ec9b23,DISK]]
2023-05-31 10:52:46,671 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}
2023-05-31 10:52:46,672 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911
2023-05-31 10:52:46,689 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService
2023-05-31 10:52:46,693 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully.
2023-05-31 10:52:46,697 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740
2023-05-31 10:52:46,697 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 10:52:46,697 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740
2023-05-31 10:52:46,697 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740
2023-05-31 10:52:46,700 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740
2023-05-31 10:52:46,702 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/meta/1588230740/info
2023-05-31 10:52:46,702 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/meta/1588230740/info
2023-05-31 10:52:46,703 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info
2023-05-31 10:52:46,704 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 10:52:46,704 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740
2023-05-31 10:52:46,705 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/meta/1588230740/rep_barrier
2023-05-31 10:52:46,705 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/meta/1588230740/rep_barrier
2023-05-31 10:52:46,706 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier
2023-05-31 10:52:46,707 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 10:52:46,707 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740
2023-05-31 10:52:46,708 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/meta/1588230740/table
2023-05-31 10:52:46,708 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/meta/1588230740/table
2023-05-31 10:52:46,709 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table
2023-05-31 10:52:46,709 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 10:52:46,711 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/meta/1588230740
2023-05-31 10:52:46,714 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/meta/1588230740
2023-05-31 10:52:46,717 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead.
2023-05-31 10:52:46,720 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740
2023-05-31 10:52:46,721 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=779739, jitterRate=-0.008511483669281006}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216}
2023-05-31 10:52:46,721 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740:
2023-05-31 10:52:46,732 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685530366625
2023-05-31 10:52:46,747 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740
2023-05-31 10:52:46,748 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740
2023-05-31 10:52:46,748 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,41383,1685530365253, state=OPEN
2023-05-31 10:52:46,750 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server
2023-05-31 10:52:46,750 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED
2023-05-31 10:52:46,755 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2
2023-05-31 10:52:46,756 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,41383,1685530365253 in 304 msec
2023-05-31 10:52:46,761 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1
2023-05-31 10:52:46,761 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 495 msec
2023-05-31 10:52:46,769 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 706 msec
2023-05-31 10:52:46,769 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685530366769, completionTime=-1
2023-05-31 10:52:46,769 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running
2023-05-31 10:52:46,770 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster...
2023-05-31 10:52:46,823 DEBUG [hconnection-0x4b1f2159-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-05-31 10:52:46,825 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:43368, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-05-31 10:52:46,841 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1
2023-05-31 10:52:46,841 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685530426841
2023-05-31 10:52:46,841 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685530486841
2023-05-31 10:52:46,841 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 71 msec
2023-05-31 10:52:46,869 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39993,1685530364309-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled.
2023-05-31 10:52:46,870 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39993,1685530364309-BalancerChore, period=300000, unit=MILLISECONDS is enabled.
2023-05-31 10:52:46,870 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39993,1685530364309-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled.
2023-05-31 10:52:46,871 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:39993, period=300000, unit=MILLISECONDS is enabled.
2023-05-31 10:52:46,872 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled.
2023-05-31 10:52:46,878 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 
2023-05-31 10:52:46,887 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating...
2023-05-31 10:52:46,888 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}
2023-05-31 10:52:46,896 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace
2023-05-31 10:52:46,899 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION
2023-05-31 10:52:46,902 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT
2023-05-31 10:52:46,923 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/.tmp/data/hbase/namespace/5e6e02193d1e6dfbf9505e17edf56681
2023-05-31 10:52:46,926 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/.tmp/data/hbase/namespace/5e6e02193d1e6dfbf9505e17edf56681 empty.
2023-05-31 10:52:46,927 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/.tmp/data/hbase/namespace/5e6e02193d1e6dfbf9505e17edf56681
2023-05-31 10:52:46,927 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions
2023-05-31 10:52:46,979 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001
2023-05-31 10:52:46,981 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5e6e02193d1e6dfbf9505e17edf56681, NAME => 'hbase:namespace,,1685530366887.5e6e02193d1e6dfbf9505e17edf56681.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/.tmp
2023-05-31 10:52:47,001 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685530366887.5e6e02193d1e6dfbf9505e17edf56681.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 10:52:47,001 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 5e6e02193d1e6dfbf9505e17edf56681, disabling compactions & flushes
2023-05-31 10:52:47,002 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685530366887.5e6e02193d1e6dfbf9505e17edf56681.
2023-05-31 10:52:47,002 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685530366887.5e6e02193d1e6dfbf9505e17edf56681.
2023-05-31 10:52:47,002 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685530366887.5e6e02193d1e6dfbf9505e17edf56681. after waiting 0 ms
2023-05-31 10:52:47,002 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685530366887.5e6e02193d1e6dfbf9505e17edf56681.
2023-05-31 10:52:47,002 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685530366887.5e6e02193d1e6dfbf9505e17edf56681.
2023-05-31 10:52:47,002 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 5e6e02193d1e6dfbf9505e17edf56681:
2023-05-31 10:52:47,007 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META
2023-05-31 10:52:47,025 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685530366887.5e6e02193d1e6dfbf9505e17edf56681.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685530367010"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685530367010"}]},"ts":"1685530367010"}
2023-05-31 10:52:47,049 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta.
2023-05-31 10:52:47,051 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-31 10:52:47,055 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685530367051"}]},"ts":"1685530367051"} 2023-05-31 10:52:47,061 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-31 10:52:47,070 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=5e6e02193d1e6dfbf9505e17edf56681, ASSIGN}] 2023-05-31 10:52:47,074 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=5e6e02193d1e6dfbf9505e17edf56681, ASSIGN 2023-05-31 10:52:47,076 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=5e6e02193d1e6dfbf9505e17edf56681, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,41383,1685530365253; forceNewPlan=false, retain=false 2023-05-31 10:52:47,227 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=5e6e02193d1e6dfbf9505e17edf56681, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,41383,1685530365253 2023-05-31 10:52:47,228 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685530366887.5e6e02193d1e6dfbf9505e17edf56681.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685530367227"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685530367227"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685530367227"}]},"ts":"1685530367227"} 2023-05-31 10:52:47,235 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 5e6e02193d1e6dfbf9505e17edf56681, server=jenkins-hbase20.apache.org,41383,1685530365253}] 2023-05-31 10:52:47,400 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685530366887.5e6e02193d1e6dfbf9505e17edf56681. 2023-05-31 10:52:47,401 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5e6e02193d1e6dfbf9505e17edf56681, NAME => 'hbase:namespace,,1685530366887.5e6e02193d1e6dfbf9505e17edf56681.', STARTKEY => '', ENDKEY => ''} 2023-05-31 10:52:47,402 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 5e6e02193d1e6dfbf9505e17edf56681 2023-05-31 10:52:47,403 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685530366887.5e6e02193d1e6dfbf9505e17edf56681.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 10:52:47,403 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 5e6e02193d1e6dfbf9505e17edf56681 2023-05-31 10:52:47,403 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 5e6e02193d1e6dfbf9505e17edf56681 2023-05-31 10:52:47,405 INFO 
[StoreOpener-5e6e02193d1e6dfbf9505e17edf56681-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 5e6e02193d1e6dfbf9505e17edf56681 2023-05-31 10:52:47,408 DEBUG [StoreOpener-5e6e02193d1e6dfbf9505e17edf56681-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/namespace/5e6e02193d1e6dfbf9505e17edf56681/info 2023-05-31 10:52:47,408 DEBUG [StoreOpener-5e6e02193d1e6dfbf9505e17edf56681-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/namespace/5e6e02193d1e6dfbf9505e17edf56681/info 2023-05-31 10:52:47,409 INFO [StoreOpener-5e6e02193d1e6dfbf9505e17edf56681-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5e6e02193d1e6dfbf9505e17edf56681 columnFamilyName info 2023-05-31 10:52:47,412 INFO [StoreOpener-5e6e02193d1e6dfbf9505e17edf56681-1] regionserver.HStore(310): Store=5e6e02193d1e6dfbf9505e17edf56681/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 
2023-05-31 10:52:47,414 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/namespace/5e6e02193d1e6dfbf9505e17edf56681 2023-05-31 10:52:47,415 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/namespace/5e6e02193d1e6dfbf9505e17edf56681 2023-05-31 10:52:47,420 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 5e6e02193d1e6dfbf9505e17edf56681 2023-05-31 10:52:47,423 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/namespace/5e6e02193d1e6dfbf9505e17edf56681/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 10:52:47,424 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 5e6e02193d1e6dfbf9505e17edf56681; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=711011, jitterRate=-0.09590306878089905}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 10:52:47,424 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 5e6e02193d1e6dfbf9505e17edf56681: 2023-05-31 10:52:47,426 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685530366887.5e6e02193d1e6dfbf9505e17edf56681., pid=6, masterSystemTime=1685530367390 2023-05-31 10:52:47,430 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685530366887.5e6e02193d1e6dfbf9505e17edf56681. 2023-05-31 10:52:47,430 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685530366887.5e6e02193d1e6dfbf9505e17edf56681. 2023-05-31 10:52:47,431 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=5e6e02193d1e6dfbf9505e17edf56681, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,41383,1685530365253 2023-05-31 10:52:47,431 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685530366887.5e6e02193d1e6dfbf9505e17edf56681.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685530367430"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685530367430"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685530367430"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685530367430"}]},"ts":"1685530367430"} 2023-05-31 10:52:47,438 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-31 10:52:47,438 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 5e6e02193d1e6dfbf9505e17edf56681, server=jenkins-hbase20.apache.org,41383,1685530365253 in 200 msec 2023-05-31 10:52:47,441 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-31 10:52:47,442 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=5e6e02193d1e6dfbf9505e17edf56681, ASSIGN in 368 msec 2023-05-31 10:52:47,443 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-31 10:52:47,444 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685530367443"}]},"ts":"1685530367443"} 2023-05-31 10:52:47,446 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-31 10:52:47,594 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-31 10:52:47,594 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-31 10:52:47,598 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 706 msec 2023-05-31 10:52:47,802 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-31 10:52:47,803 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:52:47,844 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-31 10:52:47,862 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): master:39993-0x101a1265ec50000, 
quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-31 10:52:47,868 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 30 msec 2023-05-31 10:52:47,879 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-31 10:52:47,892 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-31 10:52:47,898 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 17 msec 2023-05-31 10:52:47,909 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-31 10:52:47,910 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-31 10:52:47,913 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 2.594sec 2023-05-31 10:52:47,915 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-31 10:52:47,917 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-05-31 10:52:47,917 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-31 10:52:47,918 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39993,1685530364309-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-31 10:52:47,919 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39993,1685530364309-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-05-31 10:52:47,928 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-31 10:52:47,994 DEBUG [Listener at localhost.localdomain/44683] zookeeper.ReadOnlyZKClient(139): Connect 0x5f199977 to 127.0.0.1:58368 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 10:52:48,001 DEBUG [Listener at localhost.localdomain/44683] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7f7466cd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 10:52:48,013 DEBUG [hconnection-0x26b1719-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-31 10:52:48,024 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:43380, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-31 10:52:48,033 INFO [Listener at localhost.localdomain/44683] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,39993,1685530364309 2023-05-31 10:52:48,033 INFO [Listener at localhost.localdomain/44683] fs.HFileSystem(337): Added intercepting 
call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:52:48,041 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-31 10:52:48,041 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:52:48,042 INFO [Listener at localhost.localdomain/44683] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-31 10:52:48,050 DEBUG [Listener at localhost.localdomain/44683] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-31 10:52:48,054 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:44858, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-31 10:52:48,064 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39993] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-31 10:52:48,064 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39993] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-05-31 10:52:48,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39993] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-31 10:52:48,070 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39993] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling 2023-05-31 10:52:48,072 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-05-31 10:52:48,074 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-31 10:52:48,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39993] master.MasterRpcServices(697): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testSlowSyncLogRolling" procId is: 9 2023-05-31 10:52:48,079 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1 2023-05-31 10:52:48,081 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory 
hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1 empty. 2023-05-31 10:52:48,083 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1 2023-05-31 10:52:48,083 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testSlowSyncLogRolling regions 2023-05-31 10:52:48,091 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39993] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-31 10:52:48,104 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/.tabledesc/.tableinfo.0000000001 2023-05-31 10:52:48,106 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => 657c47879aa53a40b22ce7ed4c914df1, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1685530368064.657c47879aa53a40b22ce7ed4c914df1.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/.tmp 2023-05-31 10:52:48,122 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(866): Instantiated 
TestLogRolling-testSlowSyncLogRolling,,1685530368064.657c47879aa53a40b22ce7ed4c914df1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 10:52:48,122 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1604): Closing 657c47879aa53a40b22ce7ed4c914df1, disabling compactions & flushes 2023-05-31 10:52:48,123 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1685530368064.657c47879aa53a40b22ce7ed4c914df1. 2023-05-31 10:52:48,123 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1685530368064.657c47879aa53a40b22ce7ed4c914df1. 2023-05-31 10:52:48,123 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1685530368064.657c47879aa53a40b22ce7ed4c914df1. after waiting 0 ms 2023-05-31 10:52:48,123 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1685530368064.657c47879aa53a40b22ce7ed4c914df1. 2023-05-31 10:52:48,123 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1685530368064.657c47879aa53a40b22ce7ed4c914df1. 
2023-05-31 10:52:48,123 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for 657c47879aa53a40b22ce7ed4c914df1: 2023-05-31 10:52:48,127 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ADD_TO_META 2023-05-31 10:52:48,129 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testSlowSyncLogRolling,,1685530368064.657c47879aa53a40b22ce7ed4c914df1.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1685530368129"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685530368129"}]},"ts":"1685530368129"} 2023-05-31 10:52:48,132 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-31 10:52:48,134 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-31 10:52:48,134 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685530368134"}]},"ts":"1685530368134"} 2023-05-31 10:52:48,137 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLING in hbase:meta 2023-05-31 10:52:48,140 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=657c47879aa53a40b22ce7ed4c914df1, ASSIGN}] 2023-05-31 10:52:48,142 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=657c47879aa53a40b22ce7ed4c914df1, ASSIGN 2023-05-31 10:52:48,144 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=657c47879aa53a40b22ce7ed4c914df1, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,41383,1685530365253; forceNewPlan=false, retain=false 2023-05-31 10:52:48,295 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=657c47879aa53a40b22ce7ed4c914df1, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,41383,1685530365253 2023-05-31 10:52:48,296 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testSlowSyncLogRolling,,1685530368064.657c47879aa53a40b22ce7ed4c914df1.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1685530368295"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685530368295"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685530368295"}]},"ts":"1685530368295"} 2023-05-31 10:52:48,303 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 657c47879aa53a40b22ce7ed4c914df1, server=jenkins-hbase20.apache.org,41383,1685530365253}] 2023-05-31 10:52:48,470 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testSlowSyncLogRolling,,1685530368064.657c47879aa53a40b22ce7ed4c914df1. 
2023-05-31 10:52:48,471 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 657c47879aa53a40b22ce7ed4c914df1, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1685530368064.657c47879aa53a40b22ce7ed4c914df1.', STARTKEY => '', ENDKEY => ''} 2023-05-31 10:52:48,471 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testSlowSyncLogRolling 657c47879aa53a40b22ce7ed4c914df1 2023-05-31 10:52:48,472 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1685530368064.657c47879aa53a40b22ce7ed4c914df1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 10:52:48,472 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 657c47879aa53a40b22ce7ed4c914df1 2023-05-31 10:52:48,472 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 657c47879aa53a40b22ce7ed4c914df1 2023-05-31 10:52:48,474 INFO [StoreOpener-657c47879aa53a40b22ce7ed4c914df1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 657c47879aa53a40b22ce7ed4c914df1 2023-05-31 10:52:48,477 DEBUG [StoreOpener-657c47879aa53a40b22ce7ed4c914df1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/info 2023-05-31 10:52:48,477 DEBUG [StoreOpener-657c47879aa53a40b22ce7ed4c914df1-1] util.CommonFSUtils(522): Set 
storagePolicy=HOT for path=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/info 2023-05-31 10:52:48,477 INFO [StoreOpener-657c47879aa53a40b22ce7ed4c914df1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 657c47879aa53a40b22ce7ed4c914df1 columnFamilyName info 2023-05-31 10:52:48,478 INFO [StoreOpener-657c47879aa53a40b22ce7ed4c914df1-1] regionserver.HStore(310): Store=657c47879aa53a40b22ce7ed4c914df1/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 10:52:48,480 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1 2023-05-31 10:52:48,482 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1 2023-05-31 10:52:48,486 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 657c47879aa53a40b22ce7ed4c914df1 2023-05-31 10:52:48,489 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 10:52:48,490 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 657c47879aa53a40b22ce7ed4c914df1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=780629, jitterRate=-0.007379904389381409}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 10:52:48,490 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 657c47879aa53a40b22ce7ed4c914df1: 2023-05-31 10:52:48,491 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testSlowSyncLogRolling,,1685530368064.657c47879aa53a40b22ce7ed4c914df1., pid=11, masterSystemTime=1685530368458 2023-05-31 10:52:48,494 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testSlowSyncLogRolling,,1685530368064.657c47879aa53a40b22ce7ed4c914df1. 2023-05-31 10:52:48,494 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testSlowSyncLogRolling,,1685530368064.657c47879aa53a40b22ce7ed4c914df1. 
2023-05-31 10:52:48,495 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=657c47879aa53a40b22ce7ed4c914df1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,41383,1685530365253 2023-05-31 10:52:48,495 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testSlowSyncLogRolling,,1685530368064.657c47879aa53a40b22ce7ed4c914df1.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1685530368495"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685530368495"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685530368495"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685530368495"}]},"ts":"1685530368495"} 2023-05-31 10:52:48,501 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-31 10:52:48,502 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 657c47879aa53a40b22ce7ed4c914df1, server=jenkins-hbase20.apache.org,41383,1685530365253 in 195 msec 2023-05-31 10:52:48,505 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-31 10:52:48,505 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=657c47879aa53a40b22ce7ed4c914df1, ASSIGN in 362 msec 2023-05-31 10:52:48,507 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-31 10:52:48,507 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685530368507"}]},"ts":"1685530368507"} 2023-05-31 10:52:48,509 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLED in hbase:meta 2023-05-31 10:52:48,512 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-05-31 10:52:48,514 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling in 444 msec 2023-05-31 10:52:52,216 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-05-31 10:52:52,350 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-31 10:52:52,352 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-31 10:52:52,354 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testSlowSyncLogRolling' 2023-05-31 10:52:54,436 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-31 10:52:54,437 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-05-31 10:52:58,099 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39993] 
master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-31 10:52:58,101 INFO [Listener at localhost.localdomain/44683] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testSlowSyncLogRolling, procId: 9 completed 2023-05-31 10:52:58,110 DEBUG [Listener at localhost.localdomain/44683] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testSlowSyncLogRolling 2023-05-31 10:52:58,111 DEBUG [Listener at localhost.localdomain/44683] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testSlowSyncLogRolling,,1685530368064.657c47879aa53a40b22ce7ed4c914df1. 2023-05-31 10:53:10,168 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41383] regionserver.HRegion(9158): Flush requested on 657c47879aa53a40b22ce7ed4c914df1 2023-05-31 10:53:10,170 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 657c47879aa53a40b22ce7ed4c914df1 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-31 10:53:10,244 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/.tmp/info/4dd2b5874ee441e9abb5501f30a782cf 2023-05-31 10:53:10,288 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/.tmp/info/4dd2b5874ee441e9abb5501f30a782cf as hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/info/4dd2b5874ee441e9abb5501f30a782cf 2023-05-31 10:53:10,302 INFO [MemStoreFlusher.0] 
regionserver.HStore(1080): Added hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/info/4dd2b5874ee441e9abb5501f30a782cf, entries=7, sequenceid=11, filesize=12.1 K 2023-05-31 10:53:10,304 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 657c47879aa53a40b22ce7ed4c914df1 in 134ms, sequenceid=11, compaction requested=false 2023-05-31 10:53:10,305 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 657c47879aa53a40b22ce7ed4c914df1: 2023-05-31 10:53:18,392 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40023,DS-ede34f34-04f1-475a-a899-7e8b57f1f57a,DISK], DatanodeInfoWithStorage[127.0.0.1:39527,DS-33de876a-af59-44c8-9808-3c75c1ec9b23,DISK]] 2023-05-31 10:53:20,599 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40023,DS-ede34f34-04f1-475a-a899-7e8b57f1f57a,DISK], DatanodeInfoWithStorage[127.0.0.1:39527,DS-33de876a-af59-44c8-9808-3c75c1ec9b23,DISK]] 2023-05-31 10:53:22,807 INFO [sync.4] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40023,DS-ede34f34-04f1-475a-a899-7e8b57f1f57a,DISK], DatanodeInfoWithStorage[127.0.0.1:39527,DS-33de876a-af59-44c8-9808-3c75c1ec9b23,DISK]] 2023-05-31 10:53:25,012 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40023,DS-ede34f34-04f1-475a-a899-7e8b57f1f57a,DISK], DatanodeInfoWithStorage[127.0.0.1:39527,DS-33de876a-af59-44c8-9808-3c75c1ec9b23,DISK]] 2023-05-31 10:53:25,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41383] regionserver.HRegion(9158): Flush requested on 657c47879aa53a40b22ce7ed4c914df1 2023-05-31 10:53:25,013 INFO 
[MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 657c47879aa53a40b22ce7ed4c914df1 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-31 10:53:25,217 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40023,DS-ede34f34-04f1-475a-a899-7e8b57f1f57a,DISK], DatanodeInfoWithStorage[127.0.0.1:39527,DS-33de876a-af59-44c8-9808-3c75c1ec9b23,DISK]] 2023-05-31 10:53:25,242 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=21 (bloomFilter=true), to=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/.tmp/info/d70ddd7989774092bb106e3168c53373 2023-05-31 10:53:25,258 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/.tmp/info/d70ddd7989774092bb106e3168c53373 as hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/info/d70ddd7989774092bb106e3168c53373 2023-05-31 10:53:25,270 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/info/d70ddd7989774092bb106e3168c53373, entries=7, sequenceid=21, filesize=12.1 K 2023-05-31 10:53:25,473 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40023,DS-ede34f34-04f1-475a-a899-7e8b57f1f57a,DISK], DatanodeInfoWithStorage[127.0.0.1:39527,DS-33de876a-af59-44c8-9808-3c75c1ec9b23,DISK]] 2023-05-31 10:53:25,474 INFO 
[MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 657c47879aa53a40b22ce7ed4c914df1 in 460ms, sequenceid=21, compaction requested=false 2023-05-31 10:53:25,475 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 657c47879aa53a40b22ce7ed4c914df1: 2023-05-31 10:53:25,475 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=24.2 K, sizeToCheck=16.0 K 2023-05-31 10:53:25,475 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-31 10:53:25,478 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/info/4dd2b5874ee441e9abb5501f30a782cf because midkey is the same as first or last row 2023-05-31 10:53:27,218 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40023,DS-ede34f34-04f1-475a-a899-7e8b57f1f57a,DISK], DatanodeInfoWithStorage[127.0.0.1:39527,DS-33de876a-af59-44c8-9808-3c75c1ec9b23,DISK]] 2023-05-31 10:53:29,422 WARN [sync.4] wal.AbstractFSWAL(1302): Requesting log roll because we exceeded slow sync threshold; count=7, threshold=5, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40023,DS-ede34f34-04f1-475a-a899-7e8b57f1f57a,DISK], DatanodeInfoWithStorage[127.0.0.1:39527,DS-33de876a-af59-44c8-9808-3c75c1ec9b23,DISK]] 2023-05-31 10:53:29,423 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C41383%2C1685530365253:(num 1685530366529) roll requested 2023-05-31 10:53:29,424 INFO [sync.4] wal.AbstractFSWAL(1141): Slow sync cost: 203 ms, current pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:40023,DS-ede34f34-04f1-475a-a899-7e8b57f1f57a,DISK], DatanodeInfoWithStorage[127.0.0.1:39527,DS-33de876a-af59-44c8-9808-3c75c1ec9b23,DISK]] 2023-05-31 10:53:29,644 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 205 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40023,DS-ede34f34-04f1-475a-a899-7e8b57f1f57a,DISK], DatanodeInfoWithStorage[127.0.0.1:39527,DS-33de876a-af59-44c8-9808-3c75c1ec9b23,DISK]] 2023-05-31 10:53:29,646 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/WALs/jenkins-hbase20.apache.org,41383,1685530365253/jenkins-hbase20.apache.org%2C41383%2C1685530365253.1685530366529 with entries=24, filesize=20.43 KB; new WAL /user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/WALs/jenkins-hbase20.apache.org,41383,1685530365253/jenkins-hbase20.apache.org%2C41383%2C1685530365253.1685530409424 2023-05-31 10:53:29,659 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40023,DS-ede34f34-04f1-475a-a899-7e8b57f1f57a,DISK], DatanodeInfoWithStorage[127.0.0.1:39527,DS-33de876a-af59-44c8-9808-3c75c1ec9b23,DISK]] 2023-05-31 10:53:29,660 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/WALs/jenkins-hbase20.apache.org,41383,1685530365253/jenkins-hbase20.apache.org%2C41383%2C1685530365253.1685530366529 is not closed yet, will try archiving it next time 2023-05-31 10:53:39,442 INFO [Listener at localhost.localdomain/44683] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-05-31 10:53:44,446 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 5001 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40023,DS-ede34f34-04f1-475a-a899-7e8b57f1f57a,DISK], 
DatanodeInfoWithStorage[127.0.0.1:39527,DS-33de876a-af59-44c8-9808-3c75c1ec9b23,DISK]] 2023-05-31 10:53:44,446 WARN [sync.0] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5001 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40023,DS-ede34f34-04f1-475a-a899-7e8b57f1f57a,DISK], DatanodeInfoWithStorage[127.0.0.1:39527,DS-33de876a-af59-44c8-9808-3c75c1ec9b23,DISK]] 2023-05-31 10:53:44,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41383] regionserver.HRegion(9158): Flush requested on 657c47879aa53a40b22ce7ed4c914df1 2023-05-31 10:53:44,446 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C41383%2C1685530365253:(num 1685530409424) roll requested 2023-05-31 10:53:44,446 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 657c47879aa53a40b22ce7ed4c914df1 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-31 10:53:46,447 INFO [Listener at localhost.localdomain/44683] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-05-31 10:53:49,448 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 5001 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40023,DS-ede34f34-04f1-475a-a899-7e8b57f1f57a,DISK], DatanodeInfoWithStorage[127.0.0.1:39527,DS-33de876a-af59-44c8-9808-3c75c1ec9b23,DISK]] 2023-05-31 10:53:49,449 WARN [sync.1] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5001 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40023,DS-ede34f34-04f1-475a-a899-7e8b57f1f57a,DISK], DatanodeInfoWithStorage[127.0.0.1:39527,DS-33de876a-af59-44c8-9808-3c75c1ec9b23,DISK]] 2023-05-31 10:53:49,467 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40023,DS-ede34f34-04f1-475a-a899-7e8b57f1f57a,DISK], 
DatanodeInfoWithStorage[127.0.0.1:39527,DS-33de876a-af59-44c8-9808-3c75c1ec9b23,DISK]] 2023-05-31 10:53:49,467 WARN [sync.2] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5000 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40023,DS-ede34f34-04f1-475a-a899-7e8b57f1f57a,DISK], DatanodeInfoWithStorage[127.0.0.1:39527,DS-33de876a-af59-44c8-9808-3c75c1ec9b23,DISK]] 2023-05-31 10:53:49,469 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/WALs/jenkins-hbase20.apache.org,41383,1685530365253/jenkins-hbase20.apache.org%2C41383%2C1685530365253.1685530409424 with entries=6, filesize=6.07 KB; new WAL /user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/WALs/jenkins-hbase20.apache.org,41383,1685530365253/jenkins-hbase20.apache.org%2C41383%2C1685530365253.1685530424447 2023-05-31 10:53:49,469 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39527,DS-33de876a-af59-44c8-9808-3c75c1ec9b23,DISK], DatanodeInfoWithStorage[127.0.0.1:40023,DS-ede34f34-04f1-475a-a899-7e8b57f1f57a,DISK]] 2023-05-31 10:53:49,469 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/WALs/jenkins-hbase20.apache.org,41383,1685530365253/jenkins-hbase20.apache.org%2C41383%2C1685530365253.1685530409424 is not closed yet, will try archiving it next time 2023-05-31 10:53:49,472 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=31 (bloomFilter=true), to=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/.tmp/info/d799dd8f88764152b8d29b10a640ca91 
2023-05-31 10:53:49,483 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/.tmp/info/d799dd8f88764152b8d29b10a640ca91 as hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/info/d799dd8f88764152b8d29b10a640ca91 2023-05-31 10:53:49,492 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/info/d799dd8f88764152b8d29b10a640ca91, entries=7, sequenceid=31, filesize=12.1 K 2023-05-31 10:53:49,496 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 657c47879aa53a40b22ce7ed4c914df1 in 5050ms, sequenceid=31, compaction requested=true 2023-05-31 10:53:49,496 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 657c47879aa53a40b22ce7ed4c914df1: 2023-05-31 10:53:49,496 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=36.3 K, sizeToCheck=16.0 K 2023-05-31 10:53:49,496 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-31 10:53:49,496 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/info/4dd2b5874ee441e9abb5501f30a782cf because midkey is the same as first or last row 2023-05-31 10:53:49,498 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): 
Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 10:53:49,498 DEBUG [RS:0;jenkins-hbase20:41383-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-31 10:53:49,502 DEBUG [RS:0;jenkins-hbase20:41383-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 37197 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-31 10:53:49,504 DEBUG [RS:0;jenkins-hbase20:41383-shortCompactions-0] regionserver.HStore(1912): 657c47879aa53a40b22ce7ed4c914df1/info is initiating minor compaction (all files) 2023-05-31 10:53:49,504 INFO [RS:0;jenkins-hbase20:41383-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 657c47879aa53a40b22ce7ed4c914df1/info in TestLogRolling-testSlowSyncLogRolling,,1685530368064.657c47879aa53a40b22ce7ed4c914df1. 
2023-05-31 10:53:49,504 INFO [RS:0;jenkins-hbase20:41383-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/info/4dd2b5874ee441e9abb5501f30a782cf, hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/info/d70ddd7989774092bb106e3168c53373, hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/info/d799dd8f88764152b8d29b10a640ca91] into tmpdir=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/.tmp, totalSize=36.3 K 2023-05-31 10:53:49,505 DEBUG [RS:0;jenkins-hbase20:41383-shortCompactions-0] compactions.Compactor(207): Compacting 4dd2b5874ee441e9abb5501f30a782cf, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, earliestPutTs=1685530378117 2023-05-31 10:53:49,506 DEBUG [RS:0;jenkins-hbase20:41383-shortCompactions-0] compactions.Compactor(207): Compacting d70ddd7989774092bb106e3168c53373, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=21, earliestPutTs=1685530392172 2023-05-31 10:53:49,506 DEBUG [RS:0;jenkins-hbase20:41383-shortCompactions-0] compactions.Compactor(207): Compacting d799dd8f88764152b8d29b10a640ca91, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=31, earliestPutTs=1685530407015 2023-05-31 10:53:49,529 INFO [RS:0;jenkins-hbase20:41383-shortCompactions-0] throttle.PressureAwareThroughputController(145): 657c47879aa53a40b22ce7ed4c914df1#info#compaction#3 average throughput is 21.55 
MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-31 10:53:49,551 DEBUG [RS:0;jenkins-hbase20:41383-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/.tmp/info/402e2b99bc0045bc9a953ca354252508 as hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/info/402e2b99bc0045bc9a953ca354252508 2023-05-31 10:53:49,567 INFO [RS:0;jenkins-hbase20:41383-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 657c47879aa53a40b22ce7ed4c914df1/info of 657c47879aa53a40b22ce7ed4c914df1 into 402e2b99bc0045bc9a953ca354252508(size=27.0 K), total size for store is 27.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-31 10:53:49,568 DEBUG [RS:0;jenkins-hbase20:41383-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 657c47879aa53a40b22ce7ed4c914df1: 2023-05-31 10:53:49,568 INFO [RS:0;jenkins-hbase20:41383-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testSlowSyncLogRolling,,1685530368064.657c47879aa53a40b22ce7ed4c914df1., storeName=657c47879aa53a40b22ce7ed4c914df1/info, priority=13, startTime=1685530429498; duration=0sec 2023-05-31 10:53:49,569 DEBUG [RS:0;jenkins-hbase20:41383-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=27.0 K, sizeToCheck=16.0 K 2023-05-31 10:53:49,570 DEBUG [RS:0;jenkins-hbase20:41383-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-31 10:53:49,570 DEBUG [RS:0;jenkins-hbase20:41383-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/info/402e2b99bc0045bc9a953ca354252508 because midkey is the same as first or last row 2023-05-31 10:53:49,571 DEBUG [RS:0;jenkins-hbase20:41383-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 10:54:01,574 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41383] regionserver.HRegion(9158): Flush requested on 657c47879aa53a40b22ce7ed4c914df1 2023-05-31 10:54:01,575 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 657c47879aa53a40b22ce7ed4c914df1 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-31 10:54:01,598 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=42 (bloomFilter=true), 
to=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/.tmp/info/6117db37900d4615b7fa49d39fb222b2 2023-05-31 10:54:01,609 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/.tmp/info/6117db37900d4615b7fa49d39fb222b2 as hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/info/6117db37900d4615b7fa49d39fb222b2 2023-05-31 10:54:01,617 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/info/6117db37900d4615b7fa49d39fb222b2, entries=7, sequenceid=42, filesize=12.1 K 2023-05-31 10:54:01,619 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 657c47879aa53a40b22ce7ed4c914df1 in 43ms, sequenceid=42, compaction requested=false 2023-05-31 10:54:01,619 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 657c47879aa53a40b22ce7ed4c914df1: 2023-05-31 10:54:01,619 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=39.1 K, sizeToCheck=16.0 K 2023-05-31 10:54:01,619 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-31 10:54:01,619 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split 
hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/info/402e2b99bc0045bc9a953ca354252508 because midkey is the same as first or last row 2023-05-31 10:54:09,591 INFO [Listener at localhost.localdomain/44683] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-31 10:54:09,592 INFO [Listener at localhost.localdomain/44683] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-31 10:54:09,592 DEBUG [Listener at localhost.localdomain/44683] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5f199977 to 127.0.0.1:58368 2023-05-31 10:54:09,592 DEBUG [Listener at localhost.localdomain/44683] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 10:54:09,593 DEBUG [Listener at localhost.localdomain/44683] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-31 10:54:09,593 DEBUG [Listener at localhost.localdomain/44683] util.JVMClusterUtil(257): Found active master hash=620944073, stopped=false 2023-05-31 10:54:09,593 INFO [Listener at localhost.localdomain/44683] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,39993,1685530364309 2023-05-31 10:54:09,595 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): regionserver:41383-0x101a1265ec50001, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 10:54:09,595 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 10:54:09,595 INFO [Listener at localhost.localdomain/44683] procedure2.ProcedureExecutor(629): Stopping 2023-05-31 10:54:09,595 DEBUG [Listener at 
localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:54:09,596 DEBUG [Listener at localhost.localdomain/44683] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0418c617 to 127.0.0.1:58368 2023-05-31 10:54:09,596 DEBUG [Listener at localhost.localdomain/44683] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 10:54:09,596 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 10:54:09,596 INFO [Listener at localhost.localdomain/44683] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,41383,1685530365253' ***** 2023-05-31 10:54:09,596 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41383-0x101a1265ec50001, quorum=127.0.0.1:58368, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 10:54:09,596 INFO [Listener at localhost.localdomain/44683] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-31 10:54:09,597 INFO [RS:0;jenkins-hbase20:41383] regionserver.HeapMemoryManager(220): Stopping 2023-05-31 10:54:09,597 INFO [RS:0;jenkins-hbase20:41383] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-31 10:54:09,597 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-31 10:54:09,597 INFO [RS:0;jenkins-hbase20:41383] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-05-31 10:54:09,597 INFO [RS:0;jenkins-hbase20:41383] regionserver.HRegionServer(3303): Received CLOSE for 5e6e02193d1e6dfbf9505e17edf56681 2023-05-31 10:54:09,598 INFO [RS:0;jenkins-hbase20:41383] regionserver.HRegionServer(3303): Received CLOSE for 657c47879aa53a40b22ce7ed4c914df1 2023-05-31 10:54:09,598 INFO [RS:0;jenkins-hbase20:41383] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,41383,1685530365253 2023-05-31 10:54:09,599 DEBUG [RS:0;jenkins-hbase20:41383] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0fc3f1db to 127.0.0.1:58368 2023-05-31 10:54:09,599 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 5e6e02193d1e6dfbf9505e17edf56681, disabling compactions & flushes 2023-05-31 10:54:09,599 DEBUG [RS:0;jenkins-hbase20:41383] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 10:54:09,599 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685530366887.5e6e02193d1e6dfbf9505e17edf56681. 2023-05-31 10:54:09,599 INFO [RS:0;jenkins-hbase20:41383] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-31 10:54:09,599 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685530366887.5e6e02193d1e6dfbf9505e17edf56681. 2023-05-31 10:54:09,599 INFO [RS:0;jenkins-hbase20:41383] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-31 10:54:09,599 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685530366887.5e6e02193d1e6dfbf9505e17edf56681. after waiting 0 ms 2023-05-31 10:54:09,599 INFO [RS:0;jenkins-hbase20:41383] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-05-31 10:54:09,599 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685530366887.5e6e02193d1e6dfbf9505e17edf56681. 2023-05-31 10:54:09,599 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 5e6e02193d1e6dfbf9505e17edf56681 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-31 10:54:09,599 INFO [RS:0;jenkins-hbase20:41383] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-31 10:54:09,599 INFO [RS:0;jenkins-hbase20:41383] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-31 10:54:09,600 DEBUG [RS:0;jenkins-hbase20:41383] regionserver.HRegionServer(1478): Online Regions={5e6e02193d1e6dfbf9505e17edf56681=hbase:namespace,,1685530366887.5e6e02193d1e6dfbf9505e17edf56681., 1588230740=hbase:meta,,1.1588230740, 657c47879aa53a40b22ce7ed4c914df1=TestLogRolling-testSlowSyncLogRolling,,1685530368064.657c47879aa53a40b22ce7ed4c914df1.} 2023-05-31 10:54:09,600 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-31 10:54:09,600 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-31 10:54:09,600 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-31 10:54:09,600 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-31 10:54:09,601 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-31 10:54:09,601 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.87 KB 
heapSize=5.38 KB 2023-05-31 10:54:09,602 DEBUG [RS:0;jenkins-hbase20:41383] regionserver.HRegionServer(1504): Waiting on 1588230740, 5e6e02193d1e6dfbf9505e17edf56681, 657c47879aa53a40b22ce7ed4c914df1 2023-05-31 10:54:09,628 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/namespace/5e6e02193d1e6dfbf9505e17edf56681/.tmp/info/3e210d235601468aa906bb4182111994 2023-05-31 10:54:09,631 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.64 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/meta/1588230740/.tmp/info/f9a3360a773a43108afc66c9d90a122b 2023-05-31 10:54:09,639 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/namespace/5e6e02193d1e6dfbf9505e17edf56681/.tmp/info/3e210d235601468aa906bb4182111994 as hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/namespace/5e6e02193d1e6dfbf9505e17edf56681/info/3e210d235601468aa906bb4182111994 2023-05-31 10:54:09,651 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/namespace/5e6e02193d1e6dfbf9505e17edf56681/info/3e210d235601468aa906bb4182111994, entries=2, sequenceid=6, filesize=4.8 K 2023-05-31 10:54:09,653 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 
5e6e02193d1e6dfbf9505e17edf56681 in 54ms, sequenceid=6, compaction requested=false 2023-05-31 10:54:09,655 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=232 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/meta/1588230740/.tmp/table/510c3b979f174783a6d22efc802b18b4 2023-05-31 10:54:09,659 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/namespace/5e6e02193d1e6dfbf9505e17edf56681/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-31 10:54:09,661 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685530366887.5e6e02193d1e6dfbf9505e17edf56681. 2023-05-31 10:54:09,661 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 5e6e02193d1e6dfbf9505e17edf56681: 2023-05-31 10:54:09,661 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685530366887.5e6e02193d1e6dfbf9505e17edf56681. 2023-05-31 10:54:09,661 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 657c47879aa53a40b22ce7ed4c914df1, disabling compactions & flushes 2023-05-31 10:54:09,662 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1685530368064.657c47879aa53a40b22ce7ed4c914df1. 2023-05-31 10:54:09,662 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1685530368064.657c47879aa53a40b22ce7ed4c914df1. 
2023-05-31 10:54:09,662 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1685530368064.657c47879aa53a40b22ce7ed4c914df1. after waiting 0 ms 2023-05-31 10:54:09,662 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1685530368064.657c47879aa53a40b22ce7ed4c914df1. 2023-05-31 10:54:09,662 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 657c47879aa53a40b22ce7ed4c914df1 1/1 column families, dataSize=3.15 KB heapSize=3.63 KB 2023-05-31 10:54:09,665 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/meta/1588230740/.tmp/info/f9a3360a773a43108afc66c9d90a122b as hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/meta/1588230740/info/f9a3360a773a43108afc66c9d90a122b 2023-05-31 10:54:09,677 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.15 KB at sequenceid=48 (bloomFilter=true), to=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/.tmp/info/123a2cda17c8457aa9badad5586f4a5d 2023-05-31 10:54:09,677 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/meta/1588230740/info/f9a3360a773a43108afc66c9d90a122b, entries=20, sequenceid=14, filesize=7.4 K 2023-05-31 10:54:09,678 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/meta/1588230740/.tmp/table/510c3b979f174783a6d22efc802b18b4 as hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/meta/1588230740/table/510c3b979f174783a6d22efc802b18b4 2023-05-31 10:54:09,684 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/.tmp/info/123a2cda17c8457aa9badad5586f4a5d as hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/info/123a2cda17c8457aa9badad5586f4a5d 2023-05-31 10:54:09,686 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/meta/1588230740/table/510c3b979f174783a6d22efc802b18b4, entries=4, sequenceid=14, filesize=4.8 K 2023-05-31 10:54:09,687 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.87 KB/2938, heapSize ~5.09 KB/5216, currentSize=0 B/0 for 1588230740 in 86ms, sequenceid=14, compaction requested=false 2023-05-31 10:54:09,692 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/info/123a2cda17c8457aa9badad5586f4a5d, entries=3, sequenceid=48, filesize=7.9 K 2023-05-31 10:54:09,698 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.15 KB/3228, 
heapSize ~3.61 KB/3696, currentSize=0 B/0 for 657c47879aa53a40b22ce7ed4c914df1 in 36ms, sequenceid=48, compaction requested=true 2023-05-31 10:54:09,701 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685530368064.657c47879aa53a40b22ce7ed4c914df1.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/info/4dd2b5874ee441e9abb5501f30a782cf, hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/info/d70ddd7989774092bb106e3168c53373, hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/info/d799dd8f88764152b8d29b10a640ca91] to archive 2023-05-31 10:54:09,702 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685530368064.657c47879aa53a40b22ce7ed4c914df1.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-05-31 10:54:09,704 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-05-31 10:54:09,705 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-31 10:54:09,707 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-31 10:54:09,707 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-31 10:54:09,707 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-31 10:54:09,711 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685530368064.657c47879aa53a40b22ce7ed4c914df1.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/info/4dd2b5874ee441e9abb5501f30a782cf to hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/archive/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/info/4dd2b5874ee441e9abb5501f30a782cf 2023-05-31 10:54:09,713 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685530368064.657c47879aa53a40b22ce7ed4c914df1.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/info/d70ddd7989774092bb106e3168c53373 to 
hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/archive/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/info/d70ddd7989774092bb106e3168c53373 2023-05-31 10:54:09,715 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685530368064.657c47879aa53a40b22ce7ed4c914df1.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/info/d799dd8f88764152b8d29b10a640ca91 to hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/archive/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/info/d799dd8f88764152b8d29b10a640ca91 2023-05-31 10:54:09,746 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/data/default/TestLogRolling-testSlowSyncLogRolling/657c47879aa53a40b22ce7ed4c914df1/recovered.edits/51.seqid, newMaxSeqId=51, maxSeqId=1 2023-05-31 10:54:09,748 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1685530368064.657c47879aa53a40b22ce7ed4c914df1. 2023-05-31 10:54:09,748 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 657c47879aa53a40b22ce7ed4c914df1: 2023-05-31 10:54:09,748 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testSlowSyncLogRolling,,1685530368064.657c47879aa53a40b22ce7ed4c914df1. 2023-05-31 10:54:09,802 INFO [RS:0;jenkins-hbase20:41383] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,41383,1685530365253; all regions closed. 
2023-05-31 10:54:09,805 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/WALs/jenkins-hbase20.apache.org,41383,1685530365253 2023-05-31 10:54:09,818 DEBUG [RS:0;jenkins-hbase20:41383] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/oldWALs 2023-05-31 10:54:09,819 INFO [RS:0;jenkins-hbase20:41383] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C41383%2C1685530365253.meta:.meta(num 1685530366655) 2023-05-31 10:54:09,819 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/WALs/jenkins-hbase20.apache.org,41383,1685530365253 2023-05-31 10:54:09,830 DEBUG [RS:0;jenkins-hbase20:41383] wal.AbstractFSWAL(1028): Moved 3 WAL file(s) to /user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/oldWALs 2023-05-31 10:54:09,830 INFO [RS:0;jenkins-hbase20:41383] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C41383%2C1685530365253:(num 1685530424447) 2023-05-31 10:54:09,830 DEBUG [RS:0;jenkins-hbase20:41383] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 10:54:09,830 INFO [RS:0;jenkins-hbase20:41383] regionserver.LeaseManager(133): Closed leases 2023-05-31 10:54:09,831 INFO [RS:0;jenkins-hbase20:41383] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-31 10:54:09,831 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-31 10:54:09,831 INFO [RS:0;jenkins-hbase20:41383] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:41383 2023-05-31 10:54:09,837 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): regionserver:41383-0x101a1265ec50001, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,41383,1685530365253 2023-05-31 10:54:09,837 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 10:54:09,837 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): regionserver:41383-0x101a1265ec50001, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 10:54:09,838 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,41383,1685530365253] 2023-05-31 10:54:09,838 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,41383,1685530365253; numProcessing=1 2023-05-31 10:54:09,840 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,41383,1685530365253 already deleted, retry=false 2023-05-31 10:54:09,840 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,41383,1685530365253 expired; onlineServers=0 2023-05-31 10:54:09,840 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,39993,1685530364309' ***** 2023-05-31 10:54:09,840 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; 
onlineServer=0 2023-05-31 10:54:09,840 DEBUG [M:0;jenkins-hbase20:39993] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@643fd0f2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-31 10:54:09,840 INFO [M:0;jenkins-hbase20:39993] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,39993,1685530364309 2023-05-31 10:54:09,840 INFO [M:0;jenkins-hbase20:39993] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,39993,1685530364309; all regions closed. 2023-05-31 10:54:09,840 DEBUG [M:0;jenkins-hbase20:39993] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 10:54:09,840 DEBUG [M:0;jenkins-hbase20:39993] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-31 10:54:09,841 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-05-31 10:54:09,841 DEBUG [M:0;jenkins-hbase20:39993] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-31 10:54:09,841 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685530366180] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685530366180,5,FailOnTimeoutGroup] 2023-05-31 10:54:09,841 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685530366177] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685530366177,5,FailOnTimeoutGroup] 2023-05-31 10:54:09,842 INFO [M:0;jenkins-hbase20:39993] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-31 10:54:09,843 INFO [M:0;jenkins-hbase20:39993] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-05-31 10:54:09,843 INFO [M:0;jenkins-hbase20:39993] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-05-31 10:54:09,843 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-31 10:54:09,843 DEBUG [M:0;jenkins-hbase20:39993] master.HMaster(1512): Stopping service threads 2023-05-31 10:54:09,843 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:54:09,843 INFO [M:0;jenkins-hbase20:39993] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-31 10:54:09,844 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 10:54:09,844 INFO [M:0;jenkins-hbase20:39993] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-31 10:54:09,844 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-31 10:54:09,845 DEBUG [M:0;jenkins-hbase20:39993] zookeeper.ZKUtil(398): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-31 10:54:09,845 WARN [M:0;jenkins-hbase20:39993] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-31 10:54:09,845 INFO [M:0;jenkins-hbase20:39993] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-31 10:54:09,845 INFO [M:0;jenkins-hbase20:39993] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-31 10:54:09,846 DEBUG [M:0;jenkins-hbase20:39993] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-31 10:54:09,846 INFO [M:0;jenkins-hbase20:39993] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 10:54:09,846 DEBUG [M:0;jenkins-hbase20:39993] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 10:54:09,846 DEBUG [M:0;jenkins-hbase20:39993] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-31 10:54:09,846 DEBUG [M:0;jenkins-hbase20:39993] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-31 10:54:09,846 INFO [M:0;jenkins-hbase20:39993] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.30 KB heapSize=46.76 KB 2023-05-31 10:54:09,865 INFO [M:0;jenkins-hbase20:39993] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.30 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/2a9764065e334edea661062ecece22fe 2023-05-31 10:54:09,870 INFO [M:0;jenkins-hbase20:39993] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2a9764065e334edea661062ecece22fe 2023-05-31 10:54:09,872 DEBUG [M:0;jenkins-hbase20:39993] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/2a9764065e334edea661062ecece22fe as hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/2a9764065e334edea661062ecece22fe 2023-05-31 10:54:09,877 INFO [M:0;jenkins-hbase20:39993] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2a9764065e334edea661062ecece22fe 2023-05-31 10:54:09,877 INFO [M:0;jenkins-hbase20:39993] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/2a9764065e334edea661062ecece22fe, entries=11, sequenceid=100, filesize=6.1 K 2023-05-31 10:54:09,878 INFO [M:0;jenkins-hbase20:39993] regionserver.HRegion(2948): Finished flush of dataSize ~38.30 KB/39222, heapSize ~46.74 KB/47864, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 32ms, sequenceid=100, 
compaction requested=false 2023-05-31 10:54:09,879 INFO [M:0;jenkins-hbase20:39993] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 10:54:09,880 DEBUG [M:0;jenkins-hbase20:39993] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 10:54:09,880 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/MasterData/WALs/jenkins-hbase20.apache.org,39993,1685530364309 2023-05-31 10:54:09,884 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-31 10:54:09,884 INFO [M:0;jenkins-hbase20:39993] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-31 10:54:09,885 INFO [M:0;jenkins-hbase20:39993] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:39993 2023-05-31 10:54:09,886 DEBUG [M:0;jenkins-hbase20:39993] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,39993,1685530364309 already deleted, retry=false 2023-05-31 10:54:09,939 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): regionserver:41383-0x101a1265ec50001, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 10:54:09,939 INFO [RS:0;jenkins-hbase20:41383] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,41383,1685530365253; zookeeper connection closed. 
2023-05-31 10:54:09,939 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): regionserver:41383-0x101a1265ec50001, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 10:54:09,940 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@169c2377] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@169c2377 2023-05-31 10:54:09,940 INFO [Listener at localhost.localdomain/44683] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-31 10:54:10,039 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 10:54:10,039 DEBUG [Listener at localhost.localdomain/44683-EventThread] zookeeper.ZKWatcher(600): master:39993-0x101a1265ec50000, quorum=127.0.0.1:58368, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 10:54:10,039 INFO [M:0;jenkins-hbase20:39993] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,39993,1685530364309; zookeeper connection closed. 
2023-05-31 10:54:10,045 WARN [Listener at localhost.localdomain/44683] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 10:54:10,051 INFO [Listener at localhost.localdomain/44683] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 10:54:10,160 WARN [BP-447361426-148.251.75.209-1685530361587 heartbeating to localhost.localdomain/127.0.0.1:40463] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-31 10:54:10,160 WARN [BP-447361426-148.251.75.209-1685530361587 heartbeating to localhost.localdomain/127.0.0.1:40463] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-447361426-148.251.75.209-1685530361587 (Datanode Uuid d893772c-8efc-4b90-81ee-7c89c8ad679b) service to localhost.localdomain/127.0.0.1:40463 2023-05-31 10:54:10,162 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2/cluster_17952b38-5a7b-70a9-5139-5c68d8ddebfd/dfs/data/data3/current/BP-447361426-148.251.75.209-1685530361587] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 10:54:10,163 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2/cluster_17952b38-5a7b-70a9-5139-5c68d8ddebfd/dfs/data/data4/current/BP-447361426-148.251.75.209-1685530361587] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 10:54:10,164 WARN [Listener at localhost.localdomain/44683] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 10:54:10,166 INFO [Listener at localhost.localdomain/44683] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 10:54:10,276 
WARN [BP-447361426-148.251.75.209-1685530361587 heartbeating to localhost.localdomain/127.0.0.1:40463] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-31 10:54:10,276 WARN [BP-447361426-148.251.75.209-1685530361587 heartbeating to localhost.localdomain/127.0.0.1:40463] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-447361426-148.251.75.209-1685530361587 (Datanode Uuid 82a51e5b-7f62-4103-9d7d-c3443b595fa0) service to localhost.localdomain/127.0.0.1:40463 2023-05-31 10:54:10,277 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2/cluster_17952b38-5a7b-70a9-5139-5c68d8ddebfd/dfs/data/data1/current/BP-447361426-148.251.75.209-1685530361587] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 10:54:10,278 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2/cluster_17952b38-5a7b-70a9-5139-5c68d8ddebfd/dfs/data/data2/current/BP-447361426-148.251.75.209-1685530361587] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 10:54:10,307 INFO [Listener at localhost.localdomain/44683] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-05-31 10:54:10,384 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-31 10:54:10,424 INFO [Listener at localhost.localdomain/44683] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-31 10:54:10,458 INFO [Listener at localhost.localdomain/44683] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-31 10:54:10,467 INFO [Listener at localhost.localdomain/44683] 
hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=50 (was 10) Potentially hanging thread: nioEventLoopGroup-3-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Monitor thread for TaskMonitor java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.monitoring.TaskMonitor$MonitorRunnable.run(TaskMonitor.java:327) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.0@localhost.localdomain:40463 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase20:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner java.lang.Object.wait(Native Method) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165) org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3693) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Idle-Rpc-Conn-Sweeper-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-1 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/44683 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) 
org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-2 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1224493049) connection to localhost.localdomain/127.0.0.1:40463 from jenkins.hfs.0 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Parameter Sending Thread #1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: SnapshotHandlerChoreCleaner sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: SessionTracker java.lang.Thread.sleep(Native Method) org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:151) Potentially hanging thread: RpcClient-timer-pool-0 java.lang.Thread.sleep(Native Method) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:600) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:496) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1224493049) connection to 
localhost.localdomain/127.0.0.1:40463 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-4-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase20:0.procedureResultReporter sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77) Potentially hanging thread: IPC Client (1224493049) connection to localhost.localdomain/127.0.0.1:40463 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: org.apache.hadoop.hdfs.PeerCache@1d8b4655 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:253) org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:46) org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:124) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase20:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:40463 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HBase-Metrics2-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-1 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=442 (was 264) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=164 (was 297), ProcessCount=168 (was 168), AvailableMemoryMB=8982 (was 10205) 2023-05-31 10:54:10,474 INFO [Listener at localhost.localdomain/44683] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=51, OpenFileDescriptor=442, MaxFileDescriptor=60000, SystemLoadAverage=164, ProcessCount=168, AvailableMemoryMB=8982 2023-05-31 10:54:10,474 INFO [Listener at localhost.localdomain/44683] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-31 10:54:10,474 INFO [Listener at localhost.localdomain/44683] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2/hadoop.log.dir so I do NOT create it in target/test-data/76149b96-4216-6c9f-e648-515856c74bd0 2023-05-31 10:54:10,474 INFO [Listener at localhost.localdomain/44683] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5372a6a0-dbd5-6247-ab2e-fd255a5d0df2/hadoop.tmp.dir so I do NOT create it in target/test-data/76149b96-4216-6c9f-e648-515856c74bd0 2023-05-31 10:54:10,475 INFO [Listener at localhost.localdomain/44683] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d, deleteOnExit=true 2023-05-31 10:54:10,475 INFO [Listener at localhost.localdomain/44683] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-31 
10:54:10,475 INFO [Listener at localhost.localdomain/44683] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/test.cache.data in system properties and HBase conf 2023-05-31 10:54:10,475 INFO [Listener at localhost.localdomain/44683] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/hadoop.tmp.dir in system properties and HBase conf 2023-05-31 10:54:10,475 INFO [Listener at localhost.localdomain/44683] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/hadoop.log.dir in system properties and HBase conf 2023-05-31 10:54:10,475 INFO [Listener at localhost.localdomain/44683] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-31 10:54:10,475 INFO [Listener at localhost.localdomain/44683] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-31 10:54:10,475 INFO [Listener at localhost.localdomain/44683] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-31 10:54:10,475 DEBUG [Listener at localhost.localdomain/44683] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-05-31 10:54:10,476 INFO [Listener at localhost.localdomain/44683] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-31 10:54:10,476 INFO [Listener at localhost.localdomain/44683] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-31 10:54:10,476 INFO [Listener at localhost.localdomain/44683] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-31 10:54:10,476 INFO [Listener at localhost.localdomain/44683] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-31 10:54:10,476 INFO [Listener at localhost.localdomain/44683] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-31 10:54:10,476 INFO [Listener at localhost.localdomain/44683] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-31 10:54:10,476 INFO [Listener at localhost.localdomain/44683] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-31 10:54:10,476 INFO [Listener at localhost.localdomain/44683] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-31 10:54:10,477 INFO [Listener at localhost.localdomain/44683] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-31 10:54:10,477 INFO [Listener at localhost.localdomain/44683] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/nfs.dump.dir in system properties and HBase conf 2023-05-31 10:54:10,477 INFO [Listener at localhost.localdomain/44683] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/java.io.tmpdir in system properties and HBase conf 2023-05-31 10:54:10,477 INFO [Listener at localhost.localdomain/44683] 
hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-31 10:54:10,477 INFO [Listener at localhost.localdomain/44683] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-31 10:54:10,477 INFO [Listener at localhost.localdomain/44683] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-31 10:54:10,479 WARN [Listener at localhost.localdomain/44683] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-31 10:54:10,480 WARN [Listener at localhost.localdomain/44683] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-31 10:54:10,480 WARN [Listener at localhost.localdomain/44683] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-31 10:54:10,505 WARN [Listener at localhost.localdomain/44683] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 10:54:10,507 INFO [Listener at localhost.localdomain/44683] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 10:54:10,511 INFO [Listener at localhost.localdomain/44683] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/java.io.tmpdir/Jetty_localhost_localdomain_34685_hdfs____.a3jbsk/webapp 2023-05-31 10:54:10,584 INFO [Listener at localhost.localdomain/44683] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:34685 2023-05-31 10:54:10,586 WARN [Listener at localhost.localdomain/44683] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-31 10:54:10,587 WARN [Listener at localhost.localdomain/44683] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-31 10:54:10,587 WARN [Listener at localhost.localdomain/44683] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-31 10:54:10,612 WARN [Listener at localhost.localdomain/40701] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 10:54:10,622 WARN [Listener at localhost.localdomain/40701] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-31 10:54:10,625 WARN [Listener at localhost.localdomain/40701] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 10:54:10,626 INFO [Listener at localhost.localdomain/40701] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 10:54:10,632 INFO [Listener at localhost.localdomain/40701] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/java.io.tmpdir/Jetty_localhost_42513_datanode____qyqdz4/webapp 2023-05-31 10:54:10,711 INFO [Listener at localhost.localdomain/40701] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42513 2023-05-31 10:54:10,719 WARN [Listener at localhost.localdomain/37339] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 10:54:10,733 WARN [Listener at localhost.localdomain/37339] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-31 10:54:10,735 WARN [Listener at localhost.localdomain/37339] http.HttpRequestLog(97): Jetty request 
log can only be enabled using Log4j 2023-05-31 10:54:10,737 INFO [Listener at localhost.localdomain/37339] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 10:54:10,741 INFO [Listener at localhost.localdomain/37339] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/java.io.tmpdir/Jetty_localhost_37919_datanode____wzc2hw/webapp 2023-05-31 10:54:10,799 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf7e6d332604edcf4: Processing first storage report for DS-d619e46a-2a33-4f62-a4db-088fb5c0c43e from datanode d53feb71-23ee-4362-8b61-d224d3c03f89 2023-05-31 10:54:10,800 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf7e6d332604edcf4: from storage DS-d619e46a-2a33-4f62-a4db-088fb5c0c43e node DatanodeRegistration(127.0.0.1:45025, datanodeUuid=d53feb71-23ee-4362-8b61-d224d3c03f89, infoPort=37065, infoSecurePort=0, ipcPort=37339, storageInfo=lv=-57;cid=testClusterID;nsid=101658556;c=1685530450482), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 10:54:10,800 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf7e6d332604edcf4: Processing first storage report for DS-1cdf2598-f288-404a-bacd-79d8f137c5f4 from datanode d53feb71-23ee-4362-8b61-d224d3c03f89 2023-05-31 10:54:10,800 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf7e6d332604edcf4: from storage DS-1cdf2598-f288-404a-bacd-79d8f137c5f4 node DatanodeRegistration(127.0.0.1:45025, datanodeUuid=d53feb71-23ee-4362-8b61-d224d3c03f89, infoPort=37065, infoSecurePort=0, ipcPort=37339, 
storageInfo=lv=-57;cid=testClusterID;nsid=101658556;c=1685530450482), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 10:54:10,830 INFO [Listener at localhost.localdomain/37339] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37919 2023-05-31 10:54:10,847 WARN [Listener at localhost.localdomain/39713] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 10:54:10,908 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfa1dbbd139a9d862: Processing first storage report for DS-cb5ed5e0-1d41-4571-b900-44df063c6309 from datanode 9223e1d3-bae9-4b53-8e62-9b61979c3624 2023-05-31 10:54:10,908 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfa1dbbd139a9d862: from storage DS-cb5ed5e0-1d41-4571-b900-44df063c6309 node DatanodeRegistration(127.0.0.1:43823, datanodeUuid=9223e1d3-bae9-4b53-8e62-9b61979c3624, infoPort=36715, infoSecurePort=0, ipcPort=39713, storageInfo=lv=-57;cid=testClusterID;nsid=101658556;c=1685530450482), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 10:54:10,909 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfa1dbbd139a9d862: Processing first storage report for DS-a534d100-5812-4fb6-92ca-b1d7e489192c from datanode 9223e1d3-bae9-4b53-8e62-9b61979c3624 2023-05-31 10:54:10,909 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfa1dbbd139a9d862: from storage DS-a534d100-5812-4fb6-92ca-b1d7e489192c node DatanodeRegistration(127.0.0.1:43823, datanodeUuid=9223e1d3-bae9-4b53-8e62-9b61979c3624, infoPort=36715, infoSecurePort=0, ipcPort=39713, storageInfo=lv=-57;cid=testClusterID;nsid=101658556;c=1685530450482), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 10:54:10,960 DEBUG [Listener at 
localhost.localdomain/39713] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0 2023-05-31 10:54:10,964 INFO [Listener at localhost.localdomain/39713] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/zookeeper_0, clientPort=60520, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-31 10:54:10,966 INFO [Listener at localhost.localdomain/39713] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=60520 2023-05-31 10:54:10,966 INFO [Listener at localhost.localdomain/39713] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:54:10,967 INFO [Listener at localhost.localdomain/39713] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:54:10,984 INFO [Listener at localhost.localdomain/39713] util.FSUtils(471): Created version file at 
hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686 with version=8 2023-05-31 10:54:10,985 INFO [Listener at localhost.localdomain/39713] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/hbase-staging 2023-05-31 10:54:10,987 INFO [Listener at localhost.localdomain/39713] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-05-31 10:54:10,987 INFO [Listener at localhost.localdomain/39713] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 10:54:10,988 INFO [Listener at localhost.localdomain/39713] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-31 10:54:10,988 INFO [Listener at localhost.localdomain/39713] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-31 10:54:10,988 INFO [Listener at localhost.localdomain/39713] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 10:54:10,988 INFO [Listener at localhost.localdomain/39713] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-31 10:54:10,988 INFO [Listener at localhost.localdomain/39713] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, 
hbase.pb.AdminService 2023-05-31 10:54:10,990 INFO [Listener at localhost.localdomain/39713] ipc.NettyRpcServer(120): Bind to /148.251.75.209:36473 2023-05-31 10:54:10,990 INFO [Listener at localhost.localdomain/39713] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:54:10,991 INFO [Listener at localhost.localdomain/39713] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:54:10,993 INFO [Listener at localhost.localdomain/39713] zookeeper.RecoverableZooKeeper(93): Process identifier=master:36473 connecting to ZooKeeper ensemble=127.0.0.1:60520 2023-05-31 10:54:10,997 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): master:364730x0, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-31 10:54:10,998 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:36473-0x101a127b43e0000 connected 2023-05-31 10:54:11,008 DEBUG [Listener at localhost.localdomain/39713] zookeeper.ZKUtil(164): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 10:54:11,009 DEBUG [Listener at localhost.localdomain/39713] zookeeper.ZKUtil(164): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 10:54:11,009 DEBUG [Listener at localhost.localdomain/39713] zookeeper.ZKUtil(164): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-31 10:54:11,014 DEBUG [Listener at localhost.localdomain/39713] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36473 2023-05-31 10:54:11,014 DEBUG [Listener at localhost.localdomain/39713] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36473 2023-05-31 10:54:11,016 DEBUG [Listener at localhost.localdomain/39713] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36473 2023-05-31 10:54:11,016 DEBUG [Listener at localhost.localdomain/39713] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36473 2023-05-31 10:54:11,017 DEBUG [Listener at localhost.localdomain/39713] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36473 2023-05-31 10:54:11,017 INFO [Listener at localhost.localdomain/39713] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686, hbase.cluster.distributed=false 2023-05-31 10:54:11,031 INFO [Listener at localhost.localdomain/39713] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-05-31 10:54:11,031 INFO [Listener at localhost.localdomain/39713] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 10:54:11,031 INFO [Listener at localhost.localdomain/39713] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-31 10:54:11,031 INFO [Listener at localhost.localdomain/39713] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-31 10:54:11,031 INFO [Listener at localhost.localdomain/39713] ipc.RpcExecutor(189): Instantiated 
replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 10:54:11,032 INFO [Listener at localhost.localdomain/39713] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-31 10:54:11,032 INFO [Listener at localhost.localdomain/39713] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-31 10:54:11,033 INFO [Listener at localhost.localdomain/39713] ipc.NettyRpcServer(120): Bind to /148.251.75.209:39337 2023-05-31 10:54:11,033 INFO [Listener at localhost.localdomain/39713] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-31 10:54:11,034 DEBUG [Listener at localhost.localdomain/39713] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-31 10:54:11,035 INFO [Listener at localhost.localdomain/39713] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:54:11,036 INFO [Listener at localhost.localdomain/39713] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:54:11,037 INFO [Listener at localhost.localdomain/39713] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39337 connecting to ZooKeeper ensemble=127.0.0.1:60520 2023-05-31 10:54:11,048 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): regionserver:393370x0, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-31 10:54:11,050 DEBUG 
[Listener at localhost.localdomain/39713] zookeeper.ZKUtil(164): regionserver:393370x0, quorum=127.0.0.1:60520, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 10:54:11,050 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39337-0x101a127b43e0001 connected 2023-05-31 10:54:11,051 DEBUG [Listener at localhost.localdomain/39713] zookeeper.ZKUtil(164): regionserver:39337-0x101a127b43e0001, quorum=127.0.0.1:60520, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 10:54:11,051 DEBUG [Listener at localhost.localdomain/39713] zookeeper.ZKUtil(164): regionserver:39337-0x101a127b43e0001, quorum=127.0.0.1:60520, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-31 10:54:11,052 DEBUG [Listener at localhost.localdomain/39713] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39337 2023-05-31 10:54:11,052 DEBUG [Listener at localhost.localdomain/39713] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39337 2023-05-31 10:54:11,052 DEBUG [Listener at localhost.localdomain/39713] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39337 2023-05-31 10:54:11,053 DEBUG [Listener at localhost.localdomain/39713] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39337 2023-05-31 10:54:11,053 DEBUG [Listener at localhost.localdomain/39713] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39337 2023-05-31 10:54:11,054 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,36473,1685530450986 2023-05-31 10:54:11,059 DEBUG [Listener at localhost.localdomain/39713-EventThread] 
zookeeper.ZKWatcher(600): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-31 10:54:11,059 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,36473,1685530450986 2023-05-31 10:54:11,060 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-31 10:54:11,060 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:54:11,060 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): regionserver:39337-0x101a127b43e0001, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-31 10:54:11,061 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-31 10:54:11,063 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,36473,1685530450986 from backup master directory 2023-05-31 10:54:11,063 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-31 10:54:11,064 DEBUG [Listener at 
localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,36473,1685530450986 2023-05-31 10:54:11,064 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-31 10:54:11,064 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,36473,1685530450986 2023-05-31 10:54:11,064 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-31 10:54:11,084 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/hbase.id with ID: 0604d568-9acc-4509-a995-efab8b674f04 2023-05-31 10:54:11,100 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:54:11,103 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:54:11,118 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x75f23b94 to 127.0.0.1:60520 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 10:54:11,124 
DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@180b4ee6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 10:54:11,124 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-31 10:54:11,125 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-31 10:54:11,125 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 10:54:11,127 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/MasterData/data/master/store-tmp 2023-05-31 10:54:11,137 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 10:54:11,138 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-31 10:54:11,138 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 10:54:11,138 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 10:54:11,138 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-31 10:54:11,138 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 10:54:11,138 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-31 10:54:11,138 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 10:54:11,139 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/MasterData/WALs/jenkins-hbase20.apache.org,36473,1685530450986 2023-05-31 10:54:11,142 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C36473%2C1685530450986, suffix=, logDir=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/MasterData/WALs/jenkins-hbase20.apache.org,36473,1685530450986, archiveDir=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/MasterData/oldWALs, maxLogs=10 2023-05-31 10:54:11,149 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/MasterData/WALs/jenkins-hbase20.apache.org,36473,1685530450986/jenkins-hbase20.apache.org%2C36473%2C1685530450986.1685530451142 2023-05-31 10:54:11,149 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43823,DS-cb5ed5e0-1d41-4571-b900-44df063c6309,DISK], DatanodeInfoWithStorage[127.0.0.1:45025,DS-d619e46a-2a33-4f62-a4db-088fb5c0c43e,DISK]] 2023-05-31 10:54:11,149 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-31 10:54:11,149 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 10:54:11,149 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:54:11,149 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:54:11,151 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:54:11,153 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-31 10:54:11,154 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-31 10:54:11,154 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] 
regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 10:54:11,156 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:54:11,156 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:54:11,160 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:54:11,163 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 10:54:11,164 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=797721, jitterRate=0.014355197548866272}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 10:54:11,164 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 10:54:11,164 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, 
flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-31 10:54:11,166 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-31 10:54:11,166 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-31 10:54:11,167 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-31 10:54:11,167 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-31 10:54:11,168 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-31 10:54:11,168 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-31 10:54:11,170 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-31 10:54:11,172 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-05-31 10:54:11,187 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-31 10:54:11,187 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-05-31 10:54:11,188 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-31 10:54:11,188 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-31 10:54:11,189 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-31 10:54:11,191 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:54:11,192 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-31 10:54:11,192 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): 
master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-31 10:54:11,193 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-31 10:54:11,194 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-31 10:54:11,194 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): regionserver:39337-0x101a127b43e0001, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-31 10:54:11,194 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:54:11,194 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,36473,1685530450986, sessionid=0x101a127b43e0000, setting cluster-up flag (Was=false) 2023-05-31 10:54:11,197 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:54:11,200 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-31 10:54:11,201 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] 
procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,36473,1685530450986 2023-05-31 10:54:11,203 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:54:11,206 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-31 10:54:11,207 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,36473,1685530450986 2023-05-31 10:54:11,208 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/.hbase-snapshot/.tmp 2023-05-31 10:54:11,210 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-31 10:54:11,211 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-31 10:54:11,211 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-31 10:54:11,211 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-31 10:54:11,211 DEBUG 
[master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-31 10:54:11,211 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-05-31 10:54:11,211 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:54:11,211 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-31 10:54:11,211 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:54:11,215 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685530481215 2023-05-31 10:54:11,215 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-31 10:54:11,215 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-31 10:54:11,215 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-31 10:54:11,215 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-31 10:54:11,215 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-31 10:54:11,215 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-31 10:54:11,215 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-31 10:54:11,216 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-31 10:54:11,216 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-31 10:54:11,216 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-31 10:54:11,216 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-31 10:54:11,216 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-31 10:54:11,217 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-31 10:54:11,217 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-31 10:54:11,218 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large 
file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685530451217,5,FailOnTimeoutGroup] 2023-05-31 10:54:11,218 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-31 10:54:11,218 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685530451218,5,FailOnTimeoutGroup] 2023-05-31 10:54:11,218 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-31 10:54:11,218 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 
2023-05-31 10:54:11,218 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-31 10:54:11,218 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-05-31 10:54:11,236 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-31 10:54:11,237 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-31 10:54:11,237 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => 
'8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686 2023-05-31 10:54:11,247 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 10:54:11,249 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-31 10:54:11,251 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/hbase/meta/1588230740/info 2023-05-31 10:54:11,251 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-31 10:54:11,252 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 10:54:11,252 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-31 10:54:11,253 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/hbase/meta/1588230740/rep_barrier 2023-05-31 10:54:11,254 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-31 10:54:11,255 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 10:54:11,255 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-31 10:54:11,255 INFO [RS:0;jenkins-hbase20:39337] regionserver.HRegionServer(951): ClusterId : 0604d568-9acc-4509-a995-efab8b674f04 2023-05-31 
10:54:11,256 DEBUG [RS:0;jenkins-hbase20:39337] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-31 10:54:11,258 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/hbase/meta/1588230740/table 2023-05-31 10:54:11,258 DEBUG [RS:0;jenkins-hbase20:39337] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-31 10:54:11,258 DEBUG [RS:0;jenkins-hbase20:39337] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-31 10:54:11,259 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-31 10:54:11,260 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 10:54:11,261 DEBUG [RS:0;jenkins-hbase20:39337] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-31 10:54:11,262 DEBUG [RS:0;jenkins-hbase20:39337] zookeeper.ReadOnlyZKClient(139): Connect 0x6ff56df5 to 127.0.0.1:60520 with session timeout=90000ms, retries 30, retry interval 
1000ms, keepAlive=60000ms 2023-05-31 10:54:11,262 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/hbase/meta/1588230740 2023-05-31 10:54:11,263 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/hbase/meta/1588230740 2023-05-31 10:54:11,265 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-31 10:54:11,267 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-31 10:54:11,270 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 10:54:11,271 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=720481, jitterRate=-0.08386120200157166}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-31 10:54:11,271 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-31 10:54:11,271 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-31 10:54:11,271 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-31 10:54:11,271 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-31 10:54:11,271 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on 
hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-31 10:54:11,271 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-31 10:54:11,272 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-31 10:54:11,272 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-31 10:54:11,274 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-31 10:54:11,274 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-31 10:54:11,274 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-31 10:54:11,274 DEBUG [RS:0;jenkins-hbase20:39337] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@61328e7b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 10:54:11,275 DEBUG [RS:0;jenkins-hbase20:39337] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@650d0685, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-31 10:54:11,277 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-31 10:54:11,279 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-31 10:54:11,285 DEBUG [RS:0;jenkins-hbase20:39337] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:39337 2023-05-31 10:54:11,286 INFO [RS:0;jenkins-hbase20:39337] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-31 10:54:11,286 INFO [RS:0;jenkins-hbase20:39337] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-31 10:54:11,286 DEBUG [RS:0;jenkins-hbase20:39337] regionserver.HRegionServer(1022): About to register with Master. 2023-05-31 10:54:11,287 INFO [RS:0;jenkins-hbase20:39337] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,36473,1685530450986 with isa=jenkins-hbase20.apache.org/148.251.75.209:39337, startcode=1685530451030 2023-05-31 10:54:11,287 DEBUG [RS:0;jenkins-hbase20:39337] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-31 10:54:11,291 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:59029, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-05-31 10:54:11,291 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36473] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,39337,1685530451030 2023-05-31 10:54:11,292 DEBUG [RS:0;jenkins-hbase20:39337] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686 2023-05-31 10:54:11,292 DEBUG [RS:0;jenkins-hbase20:39337] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:40701 2023-05-31 
10:54:11,292 DEBUG [RS:0;jenkins-hbase20:39337] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-31 10:54:11,293 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 10:54:11,294 DEBUG [RS:0;jenkins-hbase20:39337] zookeeper.ZKUtil(162): regionserver:39337-0x101a127b43e0001, quorum=127.0.0.1:60520, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,39337,1685530451030 2023-05-31 10:54:11,294 WARN [RS:0;jenkins-hbase20:39337] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-31 10:54:11,294 INFO [RS:0;jenkins-hbase20:39337] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 10:54:11,294 DEBUG [RS:0;jenkins-hbase20:39337] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,39337,1685530451030 2023-05-31 10:54:11,295 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,39337,1685530451030] 2023-05-31 10:54:11,298 DEBUG [RS:0;jenkins-hbase20:39337] zookeeper.ZKUtil(162): regionserver:39337-0x101a127b43e0001, quorum=127.0.0.1:60520, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,39337,1685530451030 2023-05-31 10:54:11,299 DEBUG [RS:0;jenkins-hbase20:39337] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-31 10:54:11,299 INFO [RS:0;jenkins-hbase20:39337] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-31 
10:54:11,302 INFO [RS:0;jenkins-hbase20:39337] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-31 10:54:11,302 INFO [RS:0;jenkins-hbase20:39337] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-31 10:54:11,302 INFO [RS:0;jenkins-hbase20:39337] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-31 10:54:11,306 INFO [RS:0;jenkins-hbase20:39337] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-31 10:54:11,307 INFO [RS:0;jenkins-hbase20:39337] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-05-31 10:54:11,308 DEBUG [RS:0;jenkins-hbase20:39337] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:54:11,308 DEBUG [RS:0;jenkins-hbase20:39337] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:54:11,308 DEBUG [RS:0;jenkins-hbase20:39337] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:54:11,308 DEBUG [RS:0;jenkins-hbase20:39337] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:54:11,308 DEBUG [RS:0;jenkins-hbase20:39337] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:54:11,308 DEBUG [RS:0;jenkins-hbase20:39337] 
executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-31 10:54:11,308 DEBUG [RS:0;jenkins-hbase20:39337] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:54:11,308 DEBUG [RS:0;jenkins-hbase20:39337] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:54:11,308 DEBUG [RS:0;jenkins-hbase20:39337] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:54:11,308 DEBUG [RS:0;jenkins-hbase20:39337] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:54:11,309 INFO [RS:0;jenkins-hbase20:39337] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-31 10:54:11,309 INFO [RS:0;jenkins-hbase20:39337] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-31 10:54:11,309 INFO [RS:0;jenkins-hbase20:39337] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-31 10:54:11,320 INFO [RS:0;jenkins-hbase20:39337] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-31 10:54:11,320 INFO [RS:0;jenkins-hbase20:39337] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39337,1685530451030-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-31 10:54:11,330 INFO [RS:0;jenkins-hbase20:39337] regionserver.Replication(203): jenkins-hbase20.apache.org,39337,1685530451030 started 2023-05-31 10:54:11,330 INFO [RS:0;jenkins-hbase20:39337] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,39337,1685530451030, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:39337, sessionid=0x101a127b43e0001 2023-05-31 10:54:11,330 DEBUG [RS:0;jenkins-hbase20:39337] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-31 10:54:11,330 DEBUG [RS:0;jenkins-hbase20:39337] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,39337,1685530451030 2023-05-31 10:54:11,330 DEBUG [RS:0;jenkins-hbase20:39337] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,39337,1685530451030' 2023-05-31 10:54:11,330 DEBUG [RS:0;jenkins-hbase20:39337] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 10:54:11,331 DEBUG [RS:0;jenkins-hbase20:39337] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 10:54:11,332 DEBUG [RS:0;jenkins-hbase20:39337] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-31 10:54:11,332 DEBUG [RS:0;jenkins-hbase20:39337] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-31 10:54:11,332 DEBUG [RS:0;jenkins-hbase20:39337] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,39337,1685530451030 2023-05-31 10:54:11,332 DEBUG [RS:0;jenkins-hbase20:39337] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,39337,1685530451030' 2023-05-31 10:54:11,332 DEBUG [RS:0;jenkins-hbase20:39337] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures 
on node: '/hbase/online-snapshot/abort' 2023-05-31 10:54:11,332 DEBUG [RS:0;jenkins-hbase20:39337] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-31 10:54:11,333 DEBUG [RS:0;jenkins-hbase20:39337] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-31 10:54:11,333 INFO [RS:0;jenkins-hbase20:39337] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-31 10:54:11,333 INFO [RS:0;jenkins-hbase20:39337] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-05-31 10:54:11,429 DEBUG [jenkins-hbase20:36473] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-31 10:54:11,430 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,39337,1685530451030, state=OPENING 2023-05-31 10:54:11,431 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-31 10:54:11,432 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:54:11,433 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-31 10:54:11,433 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,39337,1685530451030}] 2023-05-31 10:54:11,436 INFO [RS:0;jenkins-hbase20:39337] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C39337%2C1685530451030, suffix=, 
logDir=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,39337,1685530451030, archiveDir=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/oldWALs, maxLogs=32 2023-05-31 10:54:11,450 INFO [RS:0;jenkins-hbase20:39337] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,39337,1685530451030/jenkins-hbase20.apache.org%2C39337%2C1685530451030.1685530451438 2023-05-31 10:54:11,451 DEBUG [RS:0;jenkins-hbase20:39337] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45025,DS-d619e46a-2a33-4f62-a4db-088fb5c0c43e,DISK], DatanodeInfoWithStorage[127.0.0.1:43823,DS-cb5ed5e0-1d41-4571-b900-44df063c6309,DISK]] 2023-05-31 10:54:11,587 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,39337,1685530451030 2023-05-31 10:54:11,587 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-31 10:54:11,589 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:49548, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-31 10:54:11,595 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-31 10:54:11,595 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 10:54:11,598 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C39337%2C1685530451030.meta, suffix=.meta, 
logDir=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,39337,1685530451030, archiveDir=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/oldWALs, maxLogs=32 2023-05-31 10:54:11,613 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,39337,1685530451030/jenkins-hbase20.apache.org%2C39337%2C1685530451030.meta.1685530451600.meta 2023-05-31 10:54:11,613 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43823,DS-cb5ed5e0-1d41-4571-b900-44df063c6309,DISK], DatanodeInfoWithStorage[127.0.0.1:45025,DS-d619e46a-2a33-4f62-a4db-088fb5c0c43e,DISK]] 2023-05-31 10:54:11,613 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-31 10:54:11,613 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-31 10:54:11,613 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-31 10:54:11,614 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-05-31 10:54:11,614 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-31 10:54:11,615 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 10:54:11,615 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-31 10:54:11,615 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-31 10:54:11,617 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-31 10:54:11,619 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/hbase/meta/1588230740/info 2023-05-31 10:54:11,619 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/hbase/meta/1588230740/info 2023-05-31 10:54:11,619 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-31 10:54:11,620 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 10:54:11,620 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-31 10:54:11,621 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/hbase/meta/1588230740/rep_barrier 2023-05-31 10:54:11,621 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/hbase/meta/1588230740/rep_barrier 2023-05-31 10:54:11,622 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-31 10:54:11,623 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 10:54:11,623 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-31 10:54:11,624 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/hbase/meta/1588230740/table 2023-05-31 10:54:11,624 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/hbase/meta/1588230740/table 2023-05-31 10:54:11,626 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-31 10:54:11,627 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 10:54:11,628 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/hbase/meta/1588230740 2023-05-31 10:54:11,631 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/hbase/meta/1588230740 2023-05-31 10:54:11,634 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-05-31 10:54:11,637 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-31 10:54:11,638 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=713024, jitterRate=-0.09334428608417511}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-31 10:54:11,638 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-31 10:54:11,640 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685530451587 2023-05-31 10:54:11,645 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-31 10:54:11,645 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-31 10:54:11,646 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,39337,1685530451030, state=OPEN 2023-05-31 10:54:11,648 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-31 10:54:11,648 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-31 10:54:11,651 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-31 10:54:11,651 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,39337,1685530451030 in 216 msec 2023-05-31 10:54:11,656 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-31 10:54:11,656 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 377 msec 2023-05-31 10:54:11,659 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 448 msec 2023-05-31 10:54:11,660 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685530451660, completionTime=-1 2023-05-31 10:54:11,660 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-31 10:54:11,660 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-05-31 10:54:11,663 DEBUG [hconnection-0x5faf4205-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-31 10:54:11,665 INFO [RS-EventLoopGroup-6-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:49554, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-31 10:54:11,666 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-31 10:54:11,666 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685530511666 2023-05-31 10:54:11,666 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685530571666 2023-05-31 10:54:11,666 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-05-31 10:54:11,671 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36473,1685530450986-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-31 10:54:11,672 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36473,1685530450986-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-31 10:54:11,672 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36473,1685530450986-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-05-31 10:54:11,672 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:36473, period=300000, unit=MILLISECONDS is enabled. 2023-05-31 10:54:11,672 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-31 10:54:11,672 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-05-31 10:54:11,672 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-31 10:54:11,673 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-31 10:54:11,673 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-31 10:54:11,675 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-31 10:54:11,677 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-31 10:54:11,679 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/.tmp/data/hbase/namespace/f519bde9341dc78b72a39524405e362b 2023-05-31 10:54:11,680 DEBUG [HFileArchiver-3] 
backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/.tmp/data/hbase/namespace/f519bde9341dc78b72a39524405e362b empty. 2023-05-31 10:54:11,680 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/.tmp/data/hbase/namespace/f519bde9341dc78b72a39524405e362b 2023-05-31 10:54:11,680 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-31 10:54:11,698 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-31 10:54:11,700 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => f519bde9341dc78b72a39524405e362b, NAME => 'hbase:namespace,,1685530451672.f519bde9341dc78b72a39524405e362b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/.tmp 2023-05-31 10:54:11,713 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685530451672.f519bde9341dc78b72a39524405e362b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 10:54:11,713 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing f519bde9341dc78b72a39524405e362b, disabling compactions & flushes 2023-05-31 10:54:11,713 INFO 
[RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685530451672.f519bde9341dc78b72a39524405e362b. 2023-05-31 10:54:11,713 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685530451672.f519bde9341dc78b72a39524405e362b. 2023-05-31 10:54:11,713 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685530451672.f519bde9341dc78b72a39524405e362b. after waiting 0 ms 2023-05-31 10:54:11,713 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685530451672.f519bde9341dc78b72a39524405e362b. 2023-05-31 10:54:11,713 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685530451672.f519bde9341dc78b72a39524405e362b. 2023-05-31 10:54:11,713 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for f519bde9341dc78b72a39524405e362b: 2023-05-31 10:54:11,717 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-31 10:54:11,719 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685530451672.f519bde9341dc78b72a39524405e362b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685530451718"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685530451718"}]},"ts":"1685530451718"} 2023-05-31 10:54:11,722 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-05-31 10:54:11,723 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-31 10:54:11,723 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685530451723"}]},"ts":"1685530451723"} 2023-05-31 10:54:11,726 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-31 10:54:11,730 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=f519bde9341dc78b72a39524405e362b, ASSIGN}] 2023-05-31 10:54:11,733 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=f519bde9341dc78b72a39524405e362b, ASSIGN 2023-05-31 10:54:11,734 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=f519bde9341dc78b72a39524405e362b, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,39337,1685530451030; forceNewPlan=false, retain=false 2023-05-31 10:54:11,885 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=f519bde9341dc78b72a39524405e362b, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,39337,1685530451030 2023-05-31 10:54:11,886 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685530451672.f519bde9341dc78b72a39524405e362b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685530451885"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685530451885"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685530451885"}]},"ts":"1685530451885"} 2023-05-31 10:54:11,889 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure f519bde9341dc78b72a39524405e362b, server=jenkins-hbase20.apache.org,39337,1685530451030}] 2023-05-31 10:54:12,054 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685530451672.f519bde9341dc78b72a39524405e362b. 2023-05-31 10:54:12,055 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f519bde9341dc78b72a39524405e362b, NAME => 'hbase:namespace,,1685530451672.f519bde9341dc78b72a39524405e362b.', STARTKEY => '', ENDKEY => ''} 2023-05-31 10:54:12,055 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace f519bde9341dc78b72a39524405e362b 2023-05-31 10:54:12,055 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685530451672.f519bde9341dc78b72a39524405e362b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 10:54:12,055 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for f519bde9341dc78b72a39524405e362b 2023-05-31 10:54:12,055 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for f519bde9341dc78b72a39524405e362b 2023-05-31 10:54:12,058 INFO 
[StoreOpener-f519bde9341dc78b72a39524405e362b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region f519bde9341dc78b72a39524405e362b 2023-05-31 10:54:12,060 DEBUG [StoreOpener-f519bde9341dc78b72a39524405e362b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/hbase/namespace/f519bde9341dc78b72a39524405e362b/info 2023-05-31 10:54:12,060 DEBUG [StoreOpener-f519bde9341dc78b72a39524405e362b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/hbase/namespace/f519bde9341dc78b72a39524405e362b/info 2023-05-31 10:54:12,060 INFO [StoreOpener-f519bde9341dc78b72a39524405e362b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f519bde9341dc78b72a39524405e362b columnFamilyName info 2023-05-31 10:54:12,061 INFO [StoreOpener-f519bde9341dc78b72a39524405e362b-1] regionserver.HStore(310): Store=f519bde9341dc78b72a39524405e362b/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 
2023-05-31 10:54:12,063 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/hbase/namespace/f519bde9341dc78b72a39524405e362b 2023-05-31 10:54:12,064 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/hbase/namespace/f519bde9341dc78b72a39524405e362b 2023-05-31 10:54:12,067 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for f519bde9341dc78b72a39524405e362b 2023-05-31 10:54:12,069 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/hbase/namespace/f519bde9341dc78b72a39524405e362b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 10:54:12,070 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened f519bde9341dc78b72a39524405e362b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=808826, jitterRate=0.028476014733314514}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 10:54:12,070 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for f519bde9341dc78b72a39524405e362b: 2023-05-31 10:54:12,072 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685530451672.f519bde9341dc78b72a39524405e362b., pid=6, masterSystemTime=1685530452043 2023-05-31 10:54:12,074 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685530451672.f519bde9341dc78b72a39524405e362b. 2023-05-31 10:54:12,074 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685530451672.f519bde9341dc78b72a39524405e362b. 2023-05-31 10:54:12,075 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=f519bde9341dc78b72a39524405e362b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,39337,1685530451030 2023-05-31 10:54:12,076 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685530451672.f519bde9341dc78b72a39524405e362b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685530452075"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685530452075"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685530452075"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685530452075"}]},"ts":"1685530452075"} 2023-05-31 10:54:12,081 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-31 10:54:12,081 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure f519bde9341dc78b72a39524405e362b, server=jenkins-hbase20.apache.org,39337,1685530451030 in 189 msec 2023-05-31 10:54:12,083 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-31 10:54:12,084 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=f519bde9341dc78b72a39524405e362b, ASSIGN in 351 msec 2023-05-31 10:54:12,084 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-31 10:54:12,085 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685530452085"}]},"ts":"1685530452085"} 2023-05-31 10:54:12,087 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-31 10:54:12,089 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-31 10:54:12,091 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 417 msec 2023-05-31 10:54:12,175 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-31 10:54:12,176 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-31 10:54:12,176 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:54:12,182 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-31 10:54:12,194 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101a127b43e0000, 
quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-31 10:54:12,200 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 17 msec 2023-05-31 10:54:12,205 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-31 10:54:12,218 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-31 10:54:12,225 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 19 msec 2023-05-31 10:54:12,233 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-31 10:54:12,234 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-31 10:54:12,235 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.171sec 2023-05-31 10:54:12,235 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-31 10:54:12,235 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-05-31 10:54:12,235 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-31 10:54:12,235 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36473,1685530450986-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-31 10:54:12,235 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36473,1685530450986-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-05-31 10:54:12,238 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-31 10:54:12,256 DEBUG [Listener at localhost.localdomain/39713] zookeeper.ReadOnlyZKClient(139): Connect 0x13adcee8 to 127.0.0.1:60520 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 10:54:12,263 DEBUG [Listener at localhost.localdomain/39713] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@480ad75b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 10:54:12,267 DEBUG [hconnection-0x86ac88a-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-31 10:54:12,270 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:49556, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-31 10:54:12,273 INFO [Listener at localhost.localdomain/39713] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,36473,1685530450986 2023-05-31 10:54:12,273 INFO [Listener at localhost.localdomain/39713] fs.HFileSystem(337): Added intercepting 
call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:54:12,277 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-31 10:54:12,277 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:54:12,278 INFO [Listener at localhost.localdomain/39713] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-31 10:54:12,290 INFO [Listener at localhost.localdomain/39713] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-05-31 10:54:12,291 INFO [Listener at localhost.localdomain/39713] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 10:54:12,291 INFO [Listener at localhost.localdomain/39713] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-31 10:54:12,291 INFO [Listener at localhost.localdomain/39713] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-31 10:54:12,291 INFO [Listener at localhost.localdomain/39713] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 10:54:12,291 INFO [Listener at localhost.localdomain/39713] ipc.RpcExecutor(189): 
Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-31 10:54:12,291 INFO [Listener at localhost.localdomain/39713] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-31 10:54:12,293 INFO [Listener at localhost.localdomain/39713] ipc.NettyRpcServer(120): Bind to /148.251.75.209:40605 2023-05-31 10:54:12,293 INFO [Listener at localhost.localdomain/39713] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-31 10:54:12,295 DEBUG [Listener at localhost.localdomain/39713] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-31 10:54:12,295 INFO [Listener at localhost.localdomain/39713] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:54:12,297 INFO [Listener at localhost.localdomain/39713] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:54:12,299 INFO [Listener at localhost.localdomain/39713] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40605 connecting to ZooKeeper ensemble=127.0.0.1:60520 2023-05-31 10:54:12,303 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): regionserver:406050x0, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-31 10:54:12,304 DEBUG [Listener at localhost.localdomain/39713] zookeeper.ZKUtil(162): regionserver:406050x0, quorum=127.0.0.1:60520, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-31 10:54:12,305 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKWatcher(625): regionserver:40605-0x101a127b43e0005 connected 2023-05-31 10:54:12,306 DEBUG [Listener at localhost.localdomain/39713] zookeeper.ZKUtil(162): regionserver:40605-0x101a127b43e0005, quorum=127.0.0.1:60520, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-05-31 10:54:12,307 DEBUG [Listener at localhost.localdomain/39713] zookeeper.ZKUtil(164): regionserver:40605-0x101a127b43e0005, quorum=127.0.0.1:60520, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-31 10:54:12,310 DEBUG [Listener at localhost.localdomain/39713] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40605 2023-05-31 10:54:12,310 DEBUG [Listener at localhost.localdomain/39713] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40605 2023-05-31 10:54:12,310 DEBUG [Listener at localhost.localdomain/39713] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40605 2023-05-31 10:54:12,311 DEBUG [Listener at localhost.localdomain/39713] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40605 2023-05-31 10:54:12,312 DEBUG [Listener at localhost.localdomain/39713] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40605 2023-05-31 10:54:12,316 INFO [RS:1;jenkins-hbase20:40605] regionserver.HRegionServer(951): ClusterId : 0604d568-9acc-4509-a995-efab8b674f04 2023-05-31 10:54:12,316 DEBUG [RS:1;jenkins-hbase20:40605] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-31 10:54:12,326 DEBUG [RS:1;jenkins-hbase20:40605] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-31 10:54:12,326 DEBUG [RS:1;jenkins-hbase20:40605] procedure.RegionServerProcedureManagerHost(43): 
Procedure online-snapshot initializing 2023-05-31 10:54:12,332 DEBUG [RS:1;jenkins-hbase20:40605] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-31 10:54:12,333 DEBUG [RS:1;jenkins-hbase20:40605] zookeeper.ReadOnlyZKClient(139): Connect 0x2f63234a to 127.0.0.1:60520 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 10:54:12,343 DEBUG [RS:1;jenkins-hbase20:40605] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@21a4d8d6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 10:54:12,343 DEBUG [RS:1;jenkins-hbase20:40605] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@259b483a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-31 10:54:12,352 DEBUG [RS:1;jenkins-hbase20:40605] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase20:40605 2023-05-31 10:54:12,352 INFO [RS:1;jenkins-hbase20:40605] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-31 10:54:12,353 INFO [RS:1;jenkins-hbase20:40605] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-31 10:54:12,353 DEBUG [RS:1;jenkins-hbase20:40605] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-31 10:54:12,353 INFO [RS:1;jenkins-hbase20:40605] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,36473,1685530450986 with isa=jenkins-hbase20.apache.org/148.251.75.209:40605, startcode=1685530452290 2023-05-31 10:54:12,353 DEBUG [RS:1;jenkins-hbase20:40605] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-31 10:54:12,356 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:47835, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-05-31 10:54:12,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36473] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,40605,1685530452290 2023-05-31 10:54:12,357 DEBUG [RS:1;jenkins-hbase20:40605] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686 2023-05-31 10:54:12,357 DEBUG [RS:1;jenkins-hbase20:40605] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:40701 2023-05-31 10:54:12,357 DEBUG [RS:1;jenkins-hbase20:40605] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-31 10:54:12,358 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 10:54:12,358 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): regionserver:39337-0x101a127b43e0001, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 10:54:12,358 DEBUG [RS:1;jenkins-hbase20:40605] zookeeper.ZKUtil(162): 
regionserver:40605-0x101a127b43e0005, quorum=127.0.0.1:60520, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,40605,1685530452290 2023-05-31 10:54:12,358 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,40605,1685530452290] 2023-05-31 10:54:12,358 WARN [RS:1;jenkins-hbase20:40605] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-31 10:54:12,358 INFO [RS:1;jenkins-hbase20:40605] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 10:54:12,359 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39337-0x101a127b43e0001, quorum=127.0.0.1:60520, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,39337,1685530451030 2023-05-31 10:54:12,359 DEBUG [RS:1;jenkins-hbase20:40605] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,40605,1685530452290 2023-05-31 10:54:12,359 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39337-0x101a127b43e0001, quorum=127.0.0.1:60520, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,40605,1685530452290 2023-05-31 10:54:12,363 DEBUG [RS:1;jenkins-hbase20:40605] zookeeper.ZKUtil(162): regionserver:40605-0x101a127b43e0005, quorum=127.0.0.1:60520, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,39337,1685530451030 2023-05-31 10:54:12,364 DEBUG [RS:1;jenkins-hbase20:40605] zookeeper.ZKUtil(162): regionserver:40605-0x101a127b43e0005, quorum=127.0.0.1:60520, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,40605,1685530452290 2023-05-31 10:54:12,364 DEBUG [RS:1;jenkins-hbase20:40605] 
regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-31 10:54:12,365 INFO [RS:1;jenkins-hbase20:40605] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-31 10:54:12,368 INFO [RS:1;jenkins-hbase20:40605] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-31 10:54:12,368 INFO [RS:1;jenkins-hbase20:40605] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-31 10:54:12,369 INFO [RS:1;jenkins-hbase20:40605] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-31 10:54:12,369 INFO [RS:1;jenkins-hbase20:40605] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-31 10:54:12,370 INFO [RS:1;jenkins-hbase20:40605] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-31 10:54:12,370 DEBUG [RS:1;jenkins-hbase20:40605] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:54:12,370 DEBUG [RS:1;jenkins-hbase20:40605] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:54:12,370 DEBUG [RS:1;jenkins-hbase20:40605] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:54:12,371 DEBUG [RS:1;jenkins-hbase20:40605] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:54:12,371 DEBUG [RS:1;jenkins-hbase20:40605] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:54:12,371 DEBUG [RS:1;jenkins-hbase20:40605] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2
2023-05-31 10:54:12,371 DEBUG [RS:1;jenkins-hbase20:40605] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:54:12,371 DEBUG [RS:1;jenkins-hbase20:40605] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:54:12,371 DEBUG [RS:1;jenkins-hbase20:40605] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:54:12,371 DEBUG [RS:1;jenkins-hbase20:40605] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:54:12,371 INFO [RS:1;jenkins-hbase20:40605] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled.
2023-05-31 10:54:12,372 INFO [RS:1;jenkins-hbase20:40605] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled.
2023-05-31 10:54:12,372 INFO [RS:1;jenkins-hbase20:40605] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled.
2023-05-31 10:54:12,382 INFO [RS:1;jenkins-hbase20:40605] regionserver.HeapMemoryManager(209): Starting, tuneOn=false
2023-05-31 10:54:12,383 INFO [RS:1;jenkins-hbase20:40605] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,40605,1685530452290-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled.
2023-05-31 10:54:12,392 INFO [RS:1;jenkins-hbase20:40605] regionserver.Replication(203): jenkins-hbase20.apache.org,40605,1685530452290 started
2023-05-31 10:54:12,392 INFO [RS:1;jenkins-hbase20:40605] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,40605,1685530452290, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:40605, sessionid=0x101a127b43e0005
2023-05-31 10:54:12,392 INFO [Listener at localhost.localdomain/39713] hbase.HBaseTestingUtility(3254): Started new server=Thread[RS:1;jenkins-hbase20:40605,5,FailOnTimeoutGroup]
2023-05-31 10:54:12,392 DEBUG [RS:1;jenkins-hbase20:40605] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting
2023-05-31 10:54:12,392 INFO [Listener at localhost.localdomain/39713] wal.TestLogRolling(323): Replication=2
2023-05-31 10:54:12,392 DEBUG [RS:1;jenkins-hbase20:40605] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,40605,1685530452290
2023-05-31 10:54:12,392 DEBUG [RS:1;jenkins-hbase20:40605] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,40605,1685530452290'
2023-05-31 10:54:12,393 DEBUG [RS:1;jenkins-hbase20:40605] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-05-31 10:54:12,394 DEBUG [RS:1;jenkins-hbase20:40605] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-05-31 10:54:12,394 DEBUG [Listener at localhost.localdomain/39713] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false
2023-05-31 10:54:12,395 DEBUG [RS:1;jenkins-hbase20:40605] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started
2023-05-31 10:54:12,395 DEBUG [RS:1;jenkins-hbase20:40605] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting
2023-05-31 10:54:12,395 DEBUG [RS:1;jenkins-hbase20:40605] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,40605,1685530452290
2023-05-31 10:54:12,395 DEBUG [RS:1;jenkins-hbase20:40605] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,40605,1685530452290'
2023-05-31 10:54:12,395 DEBUG [RS:1;jenkins-hbase20:40605] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort'
2023-05-31 10:54:12,396 DEBUG [RS:1;jenkins-hbase20:40605] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired'
2023-05-31 10:54:12,396 DEBUG [RS:1;jenkins-hbase20:40605] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started
2023-05-31 10:54:12,396 INFO [RS:1;jenkins-hbase20:40605] quotas.RegionServerRpcQuotaManager(63): Quota support disabled
2023-05-31 10:54:12,397 INFO [RS:1;jenkins-hbase20:40605] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager.
2023-05-31 10:54:12,398 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:34000, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService
2023-05-31 10:54:12,399 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36473] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions.
2023-05-31 10:54:12,399 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36473] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing.
2023-05-31 10:54:12,400 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36473] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-05-31 10:54:12,402 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36473] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath
2023-05-31 10:54:12,403 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_PRE_OPERATION
2023-05-31 10:54:12,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36473] master.MasterRpcServices(697): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnDatanodeDeath" procId is: 9
2023-05-31 10:54:12,405 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_WRITE_FS_LAYOUT
2023-05-31 10:54:12,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36473] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9
2023-05-31 10:54:12,407 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8ee0c48ad9305e3f997566911a7479e9
2023-05-31 10:54:12,407 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8ee0c48ad9305e3f997566911a7479e9 empty.
2023-05-31 10:54:12,408 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8ee0c48ad9305e3f997566911a7479e9
2023-05-31 10:54:12,408 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnDatanodeDeath regions
2023-05-31 10:54:12,422 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/.tabledesc/.tableinfo.0000000001
2023-05-31 10:54:12,423 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8ee0c48ad9305e3f997566911a7479e9, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1685530452399.8ee0c48ad9305e3f997566911a7479e9.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/.tmp
2023-05-31 10:54:12,434 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1685530452399.8ee0c48ad9305e3f997566911a7479e9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 10:54:12,434 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1604): Closing 8ee0c48ad9305e3f997566911a7479e9, disabling compactions & flushes
2023-05-31 10:54:12,434 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1685530452399.8ee0c48ad9305e3f997566911a7479e9.
2023-05-31 10:54:12,434 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685530452399.8ee0c48ad9305e3f997566911a7479e9.
2023-05-31 10:54:12,434 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685530452399.8ee0c48ad9305e3f997566911a7479e9. after waiting 0 ms
2023-05-31 10:54:12,434 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1685530452399.8ee0c48ad9305e3f997566911a7479e9.
2023-05-31 10:54:12,434 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1685530452399.8ee0c48ad9305e3f997566911a7479e9.
2023-05-31 10:54:12,434 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1558): Region close journal for 8ee0c48ad9305e3f997566911a7479e9:
2023-05-31 10:54:12,438 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ADD_TO_META
2023-05-31 10:54:12,440 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1685530452399.8ee0c48ad9305e3f997566911a7479e9.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1685530452440"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685530452440"}]},"ts":"1685530452440"}
2023-05-31 10:54:12,442 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta.
2023-05-31 10:54:12,443 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ASSIGN_REGIONS
2023-05-31 10:54:12,444 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685530452443"}]},"ts":"1685530452443"}
2023-05-31 10:54:12,445 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLING in hbase:meta
2023-05-31 10:54:12,451 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0}
2023-05-31 10:54:12,453 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0
2023-05-31 10:54:12,453 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0
2023-05-31 10:54:12,453 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1
2023-05-31 10:54:12,453 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=8ee0c48ad9305e3f997566911a7479e9, ASSIGN}]
2023-05-31 10:54:12,456 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=8ee0c48ad9305e3f997566911a7479e9, ASSIGN
2023-05-31 10:54:12,457 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=8ee0c48ad9305e3f997566911a7479e9, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,40605,1685530452290; forceNewPlan=false, retain=false
2023-05-31 10:54:12,500 INFO [RS:1;jenkins-hbase20:40605] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C40605%2C1685530452290, suffix=, logDir=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,40605,1685530452290, archiveDir=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/oldWALs, maxLogs=32
2023-05-31 10:54:12,519 INFO [RS:1;jenkins-hbase20:40605] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,40605,1685530452290/jenkins-hbase20.apache.org%2C40605%2C1685530452290.1685530452502
2023-05-31 10:54:12,519 DEBUG [RS:1;jenkins-hbase20:40605] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43823,DS-cb5ed5e0-1d41-4571-b900-44df063c6309,DISK], DatanodeInfoWithStorage[127.0.0.1:45025,DS-d619e46a-2a33-4f62-a4db-088fb5c0c43e,DISK]]
2023-05-31 10:54:12,613 INFO [jenkins-hbase20:36473] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment.
2023-05-31 10:54:12,614 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=8ee0c48ad9305e3f997566911a7479e9, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,40605,1685530452290
2023-05-31 10:54:12,614 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1685530452399.8ee0c48ad9305e3f997566911a7479e9.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1685530452614"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685530452614"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685530452614"}]},"ts":"1685530452614"}
2023-05-31 10:54:12,617 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 8ee0c48ad9305e3f997566911a7479e9, server=jenkins-hbase20.apache.org,40605,1685530452290}]
2023-05-31 10:54:12,772 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,40605,1685530452290
2023-05-31 10:54:12,772 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false
2023-05-31 10:54:12,778 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:36210, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService
2023-05-31 10:54:12,787 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnDatanodeDeath,,1685530452399.8ee0c48ad9305e3f997566911a7479e9.
2023-05-31 10:54:12,787 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8ee0c48ad9305e3f997566911a7479e9, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1685530452399.8ee0c48ad9305e3f997566911a7479e9.', STARTKEY => '', ENDKEY => ''}
2023-05-31 10:54:12,788 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnDatanodeDeath 8ee0c48ad9305e3f997566911a7479e9
2023-05-31 10:54:12,789 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1685530452399.8ee0c48ad9305e3f997566911a7479e9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 10:54:12,789 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 8ee0c48ad9305e3f997566911a7479e9
2023-05-31 10:54:12,789 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 8ee0c48ad9305e3f997566911a7479e9
2023-05-31 10:54:12,791 INFO [StoreOpener-8ee0c48ad9305e3f997566911a7479e9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 8ee0c48ad9305e3f997566911a7479e9
2023-05-31 10:54:12,792 DEBUG [StoreOpener-8ee0c48ad9305e3f997566911a7479e9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8ee0c48ad9305e3f997566911a7479e9/info
2023-05-31 10:54:12,792 DEBUG [StoreOpener-8ee0c48ad9305e3f997566911a7479e9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8ee0c48ad9305e3f997566911a7479e9/info
2023-05-31 10:54:12,793 INFO [StoreOpener-8ee0c48ad9305e3f997566911a7479e9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8ee0c48ad9305e3f997566911a7479e9 columnFamilyName info
2023-05-31 10:54:12,793 INFO [StoreOpener-8ee0c48ad9305e3f997566911a7479e9-1] regionserver.HStore(310): Store=8ee0c48ad9305e3f997566911a7479e9/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 10:54:12,795 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8ee0c48ad9305e3f997566911a7479e9
2023-05-31 10:54:12,796 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8ee0c48ad9305e3f997566911a7479e9
2023-05-31 10:54:12,800 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 8ee0c48ad9305e3f997566911a7479e9
2023-05-31 10:54:12,803 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8ee0c48ad9305e3f997566911a7479e9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-05-31 10:54:12,804 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 8ee0c48ad9305e3f997566911a7479e9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=710378, jitterRate=-0.09670868515968323}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-05-31 10:54:12,804 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 8ee0c48ad9305e3f997566911a7479e9:
2023-05-31 10:54:12,806 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnDatanodeDeath,,1685530452399.8ee0c48ad9305e3f997566911a7479e9., pid=11, masterSystemTime=1685530452772
2023-05-31 10:54:12,810 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnDatanodeDeath,,1685530452399.8ee0c48ad9305e3f997566911a7479e9.
2023-05-31 10:54:12,811 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnDatanodeDeath,,1685530452399.8ee0c48ad9305e3f997566911a7479e9.
2023-05-31 10:54:12,812 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=8ee0c48ad9305e3f997566911a7479e9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,40605,1685530452290
2023-05-31 10:54:12,812 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1685530452399.8ee0c48ad9305e3f997566911a7479e9.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1685530452812"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685530452812"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685530452812"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685530452812"}]},"ts":"1685530452812"}
2023-05-31 10:54:12,818 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10
2023-05-31 10:54:12,818 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 8ee0c48ad9305e3f997566911a7479e9, server=jenkins-hbase20.apache.org,40605,1685530452290 in 198 msec
2023-05-31 10:54:12,821 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9
2023-05-31 10:54:12,821 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=8ee0c48ad9305e3f997566911a7479e9, ASSIGN in 365 msec
2023-05-31 10:54:12,822 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_UPDATE_DESC_CACHE
2023-05-31 10:54:12,822 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685530452822"}]},"ts":"1685530452822"}
2023-05-31 10:54:12,823 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLED in hbase:meta
2023-05-31 10:54:12,826 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_POST_OPERATION
2023-05-31 10:54:12,828 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath in 426 msec
2023-05-31 10:54:15,202 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties
2023-05-31 10:54:17,300 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta'
2023-05-31 10:54:17,300 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace'
2023-05-31 10:54:18,365 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnDatanodeDeath'
2023-05-31 10:54:22,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36473] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9
2023-05-31 10:54:22,407 INFO [Listener at localhost.localdomain/39713] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnDatanodeDeath, procId: 9 completed
2023-05-31 10:54:22,410 DEBUG [Listener at localhost.localdomain/39713] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRollOnDatanodeDeath
2023-05-31 10:54:22,410 DEBUG [Listener at localhost.localdomain/39713] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnDatanodeDeath,,1685530452399.8ee0c48ad9305e3f997566911a7479e9.
2023-05-31 10:54:22,426 WARN [Listener at localhost.localdomain/39713] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-05-31 10:54:22,429 WARN [Listener at localhost.localdomain/39713] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-31 10:54:22,430 INFO [Listener at localhost.localdomain/39713] log.Slf4jLog(67): jetty-6.1.26
2023-05-31 10:54:22,435 INFO [Listener at localhost.localdomain/39713] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/java.io.tmpdir/Jetty_localhost_46513_datanode____.6t3osk/webapp
2023-05-31 10:54:22,521 INFO [Listener at localhost.localdomain/39713] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46513
2023-05-31 10:54:22,536 WARN [Listener at localhost.localdomain/44411] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 10:54:22,554 WARN [Listener at localhost.localdomain/44411] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-05-31 10:54:22,556 WARN [Listener at localhost.localdomain/44411] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-31 10:54:22,557 INFO [Listener at localhost.localdomain/44411] log.Slf4jLog(67): jetty-6.1.26
2023-05-31 10:54:22,561 INFO [Listener at localhost.localdomain/44411] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/java.io.tmpdir/Jetty_localhost_42143_datanode____.qtdgbr/webapp
2023-05-31 10:54:22,678 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xafeeae390422081: Processing first storage report for DS-6fdb9ca9-c672-4d25-a6a9-ad40166225c7 from datanode 2b8688d7-6ec1-43cd-b9ae-60d0dac66b1e
2023-05-31 10:54:22,678 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xafeeae390422081: from storage DS-6fdb9ca9-c672-4d25-a6a9-ad40166225c7 node DatanodeRegistration(127.0.0.1:41755, datanodeUuid=2b8688d7-6ec1-43cd-b9ae-60d0dac66b1e, infoPort=39265, infoSecurePort=0, ipcPort=44411, storageInfo=lv=-57;cid=testClusterID;nsid=101658556;c=1685530450482), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 10:54:22,678 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xafeeae390422081: Processing first storage report for DS-b9795590-b125-4a30-b228-63d756fdf517 from datanode 2b8688d7-6ec1-43cd-b9ae-60d0dac66b1e
2023-05-31 10:54:22,678 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xafeeae390422081: from storage DS-b9795590-b125-4a30-b228-63d756fdf517 node DatanodeRegistration(127.0.0.1:41755, datanodeUuid=2b8688d7-6ec1-43cd-b9ae-60d0dac66b1e, infoPort=39265, infoSecurePort=0, ipcPort=44411, storageInfo=lv=-57;cid=testClusterID;nsid=101658556;c=1685530450482), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 10:54:22,695 INFO [Listener at localhost.localdomain/44411] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42143
2023-05-31 10:54:22,706 WARN [Listener at localhost.localdomain/42263] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 10:54:22,755 WARN [Listener at localhost.localdomain/42263] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-05-31 10:54:22,758 WARN [Listener at localhost.localdomain/42263] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-31 10:54:22,759 INFO [Listener at localhost.localdomain/42263] log.Slf4jLog(67): jetty-6.1.26
2023-05-31 10:54:22,770 INFO [Listener at localhost.localdomain/42263] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/java.io.tmpdir/Jetty_localhost_33389_datanode____p2pyz/webapp
2023-05-31 10:54:22,801 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x131dd25449b65fef: Processing first storage report for DS-77cc8813-db93-4f5d-847f-c26769fb445d from datanode 992cc9e3-7d53-4848-aef9-de0635bde546
2023-05-31 10:54:22,801 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x131dd25449b65fef: from storage DS-77cc8813-db93-4f5d-847f-c26769fb445d node DatanodeRegistration(127.0.0.1:46239, datanodeUuid=992cc9e3-7d53-4848-aef9-de0635bde546, infoPort=35025, infoSecurePort=0, ipcPort=42263, storageInfo=lv=-57;cid=testClusterID;nsid=101658556;c=1685530450482), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0
2023-05-31 10:54:22,802 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x131dd25449b65fef: Processing first storage report for DS-6d86955a-5192-4f01-be14-893646a6236a from datanode 992cc9e3-7d53-4848-aef9-de0635bde546
2023-05-31 10:54:22,802 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x131dd25449b65fef: from storage DS-6d86955a-5192-4f01-be14-893646a6236a node DatanodeRegistration(127.0.0.1:46239, datanodeUuid=992cc9e3-7d53-4848-aef9-de0635bde546, infoPort=35025, infoSecurePort=0, ipcPort=42263, storageInfo=lv=-57;cid=testClusterID;nsid=101658556;c=1685530450482), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 10:54:22,865 INFO [Listener at localhost.localdomain/42263] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33389
2023-05-31 10:54:22,877 WARN [Listener at localhost.localdomain/36107] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 10:54:22,974 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x62c5953aa518e4ed: Processing first storage report for DS-fc78210b-04b7-4c33-aa7e-a36c3aec0afb from datanode 6fad3492-0bf3-4c5d-a0cc-67dd36abcc25
2023-05-31 10:54:22,974 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x62c5953aa518e4ed: from storage DS-fc78210b-04b7-4c33-aa7e-a36c3aec0afb node DatanodeRegistration(127.0.0.1:32931, datanodeUuid=6fad3492-0bf3-4c5d-a0cc-67dd36abcc25, infoPort=36083, infoSecurePort=0, ipcPort=36107, storageInfo=lv=-57;cid=testClusterID;nsid=101658556;c=1685530450482), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 10:54:22,974 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x62c5953aa518e4ed: Processing first storage report for DS-cd0b17af-c017-41a2-8d07-508cf67371a7 from datanode 6fad3492-0bf3-4c5d-a0cc-67dd36abcc25
2023-05-31 10:54:22,974 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x62c5953aa518e4ed: from storage DS-cd0b17af-c017-41a2-8d07-508cf67371a7 node DatanodeRegistration(127.0.0.1:32931, datanodeUuid=6fad3492-0bf3-4c5d-a0cc-67dd36abcc25, infoPort=36083, infoSecurePort=0, ipcPort=36107, storageInfo=lv=-57;cid=testClusterID;nsid=101658556;c=1685530450482), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0
2023-05-31 10:54:23,007 WARN [Listener at localhost.localdomain/36107] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-05-31 10:54:23,012 WARN [ResponseProcessor for block BP-541088169-148.251.75.209-1685530450482:blk_1073741838_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-541088169-148.251.75.209-1685530450482:blk_1073741838_1014
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-05-31 10:54:23,012 WARN [ResponseProcessor for block BP-541088169-148.251.75.209-1685530450482:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-541088169-148.251.75.209-1685530450482:blk_1073741833_1009
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-05-31 10:54:23,013 WARN [ResponseProcessor for block BP-541088169-148.251.75.209-1685530450482:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-541088169-148.251.75.209-1685530450482:blk_1073741829_1005
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-05-31 10:54:23,014 WARN [DataStreamer for file /user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,40605,1685530452290/jenkins-hbase20.apache.org%2C40605%2C1685530452290.1685530452502 block BP-541088169-148.251.75.209-1685530450482:blk_1073741838_1014] hdfs.DataStreamer(1548): Error Recovery for BP-541088169-148.251.75.209-1685530450482:blk_1073741838_1014 in pipeline [DatanodeInfoWithStorage[127.0.0.1:43823,DS-cb5ed5e0-1d41-4571-b900-44df063c6309,DISK], DatanodeInfoWithStorage[127.0.0.1:45025,DS-d619e46a-2a33-4f62-a4db-088fb5c0c43e,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:43823,DS-cb5ed5e0-1d41-4571-b900-44df063c6309,DISK]) is bad.
2023-05-31 10:54:23,015 WARN [DataStreamer for file /user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/MasterData/WALs/jenkins-hbase20.apache.org,36473,1685530450986/jenkins-hbase20.apache.org%2C36473%2C1685530450986.1685530451142 block BP-541088169-148.251.75.209-1685530450482:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-541088169-148.251.75.209-1685530450482:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:43823,DS-cb5ed5e0-1d41-4571-b900-44df063c6309,DISK], DatanodeInfoWithStorage[127.0.0.1:45025,DS-d619e46a-2a33-4f62-a4db-088fb5c0c43e,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:43823,DS-cb5ed5e0-1d41-4571-b900-44df063c6309,DISK]) is bad.
2023-05-31 10:54:23,015 WARN [ResponseProcessor for block BP-541088169-148.251.75.209-1685530450482:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-541088169-148.251.75.209-1685530450482:blk_1073741832_1008 java.io.IOException: Bad response ERROR for BP-541088169-148.251.75.209-1685530450482:blk_1073741832_1008 from datanode DatanodeInfoWithStorage[127.0.0.1:43823,DS-cb5ed5e0-1d41-4571-b900-44df063c6309,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-31 10:54:23,015 WARN [DataStreamer for file /user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,39337,1685530451030/jenkins-hbase20.apache.org%2C39337%2C1685530451030.meta.1685530451600.meta block BP-541088169-148.251.75.209-1685530450482:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-541088169-148.251.75.209-1685530450482:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:43823,DS-cb5ed5e0-1d41-4571-b900-44df063c6309,DISK], DatanodeInfoWithStorage[127.0.0.1:45025,DS-d619e46a-2a33-4f62-a4db-088fb5c0c43e,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:43823,DS-cb5ed5e0-1d41-4571-b900-44df063c6309,DISK]) is bad. 2023-05-31 10:54:23,016 WARN [DataStreamer for file /user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,39337,1685530451030/jenkins-hbase20.apache.org%2C39337%2C1685530451030.1685530451438 block BP-541088169-148.251.75.209-1685530450482:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-541088169-148.251.75.209-1685530450482:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:45025,DS-d619e46a-2a33-4f62-a4db-088fb5c0c43e,DISK], DatanodeInfoWithStorage[127.0.0.1:43823,DS-cb5ed5e0-1d41-4571-b900-44df063c6309,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:43823,DS-cb5ed5e0-1d41-4571-b900-44df063c6309,DISK]) is bad. 
2023-05-31 10:54:23,016 WARN [PacketResponder: BP-541088169-148.251.75.209-1685530450482:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:43823]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:23,021 INFO [Listener at localhost.localdomain/36107] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 10:54:23,024 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1140454755_17 at /127.0.0.1:47354 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:45025:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:47354 dst: /127.0.0.1:45025 
java.nio.channels.ClosedByInterruptException at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:406) at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:23,031 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_51907993_17 at /127.0.0.1:47330 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:45025:DataXceiver error processing WRITE_BLOCK operation 
src: /127.0.0.1:47330 dst: /127.0.0.1:45025 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:45025 remote=/127.0.0.1:47330]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:23,031 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1140454755_17 at /127.0.0.1:47366 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741833_1009]] datanode.DataXceiver(323): 
127.0.0.1:45025:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:47366 dst: /127.0.0.1:45025 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:45025 remote=/127.0.0.1:47366]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:23,032 WARN [PacketResponder: BP-541088169-148.251.75.209-1685530450482:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, 
downstreams=1:[127.0.0.1:45025]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:23,032 WARN [PacketResponder: BP-541088169-148.251.75.209-1685530450482:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:45025]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:23,032 WARN [PacketResponder: BP-541088169-148.251.75.209-1685530450482:blk_1073741838_1014, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:45025]] 
datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:23,031 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1373393171_17 at /127.0.0.1:47402 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:45025:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:47402 dst: /127.0.0.1:45025 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:45025 remote=/127.0.0.1:47402]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:23,036 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1373393171_17 at /127.0.0.1:39938 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:43823:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:39938 dst: /127.0.0.1:43823 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:23,036 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_51907993_17 at /127.0.0.1:39870 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:43823:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:39870 dst: /127.0.0.1:43823 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:23,037 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1140454755_17 at /127.0.0.1:39900 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:43823:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:39900 dst: /127.0.0.1:43823 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:23,126 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1140454755_17 at /127.0.0.1:39888 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:43823:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:39888 dst: /127.0.0.1:43823 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:23,126 WARN [BP-541088169-148.251.75.209-1685530450482 heartbeating to localhost.localdomain/127.0.0.1:40701] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-31 10:54:23,126 WARN [BP-541088169-148.251.75.209-1685530450482 heartbeating to localhost.localdomain/127.0.0.1:40701] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-541088169-148.251.75.209-1685530450482 (Datanode Uuid 
9223e1d3-bae9-4b53-8e62-9b61979c3624) service to localhost.localdomain/127.0.0.1:40701 2023-05-31 10:54:23,128 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data3/current/BP-541088169-148.251.75.209-1685530450482] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 10:54:23,128 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data4/current/BP-541088169-148.251.75.209-1685530450482] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 10:54:23,131 WARN [Listener at localhost.localdomain/36107] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 10:54:23,131 WARN [ResponseProcessor for block BP-541088169-148.251.75.209-1685530450482:blk_1073741829_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-541088169-148.251.75.209-1685530450482:blk_1073741829_1015 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-31 10:54:23,132 WARN [ResponseProcessor for block BP-541088169-148.251.75.209-1685530450482:blk_1073741832_1017] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-541088169-148.251.75.209-1685530450482:blk_1073741832_1017 java.io.EOFException: Unexpected EOF while trying to read response from server at 
org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-31 10:54:23,132 WARN [ResponseProcessor for block BP-541088169-148.251.75.209-1685530450482:blk_1073741833_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-541088169-148.251.75.209-1685530450482:blk_1073741833_1018 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-31 10:54:23,132 WARN [ResponseProcessor for block BP-541088169-148.251.75.209-1685530450482:blk_1073741838_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-541088169-148.251.75.209-1685530450482:blk_1073741838_1016 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-31 10:54:23,137 INFO [Listener at localhost.localdomain/36107] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 10:54:23,239 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1140454755_17 at /127.0.0.1:41266 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:45025:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:41266 dst: /127.0.0.1:45025 java.io.InterruptedIOException: Interrupted while waiting for IO on 
channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 10:54:23,240 WARN [BP-541088169-148.251.75.209-1685530450482 heartbeating to localhost.localdomain/127.0.0.1:40701] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-05-31 10:54:23,240 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1373393171_17 at /127.0.0.1:41250 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:45025:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:41250 dst: /127.0.0.1:45025
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 10:54:23,239 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1140454755_17 at /127.0.0.1:41274 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:45025:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:41274 dst: /127.0.0.1:45025
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 10:54:23,239 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_51907993_17 at /127.0.0.1:41244 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:45025:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:41244 dst: /127.0.0.1:45025
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 10:54:23,241 WARN [BP-541088169-148.251.75.209-1685530450482 heartbeating to localhost.localdomain/127.0.0.1:40701] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-541088169-148.251.75.209-1685530450482 (Datanode Uuid d53feb71-23ee-4362-8b61-d224d3c03f89) service to localhost.localdomain/127.0.0.1:40701
2023-05-31 10:54:23,244 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data1/current/BP-541088169-148.251.75.209-1685530450482] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 10:54:23,244 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data2/current/BP-541088169-148.251.75.209-1685530450482] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 10:54:23,249 DEBUG [Listener at localhost.localdomain/36107] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-05-31 10:54:23,252 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:33236, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-05-31 10:54:23,253 WARN [RS:1;jenkins-hbase20:40605.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=4, requesting roll of WAL
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45025,DS-d619e46a-2a33-4f62-a4db-088fb5c0c43e,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-31 10:54:23,254 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C40605%2C1685530452290:(num 1685530452502) roll requested
2023-05-31 10:54:23,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40605] ipc.MetricsHBaseServer(134): Unknown exception type
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979)
	at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168)
	at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125)
	at java.lang.Thread.run(Thread.java:750)
Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45025,DS-d619e46a-2a33-4f62-a4db-088fb5c0c43e,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-31 10:54:23,256 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40605] ipc.CallRunner(144): callId: 9 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:33236 deadline: 1685530473252, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL
2023-05-31 10:54:23,266 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL
2023-05-31 10:54:23,266 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,40605,1685530452290/jenkins-hbase20.apache.org%2C40605%2C1685530452290.1685530452502 with entries=1, filesize=467 B; new WAL /user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,40605,1685530452290/jenkins-hbase20.apache.org%2C40605%2C1685530452290.1685530463254
2023-05-31 10:54:23,266 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:32931,DS-fc78210b-04b7-4c33-aa7e-a36c3aec0afb,DISK], DatanodeInfoWithStorage[127.0.0.1:41755,DS-6fdb9ca9-c672-4d25-a6a9-ad40166225c7,DISK]]
2023-05-31 10:54:23,266 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,40605,1685530452290/jenkins-hbase20.apache.org%2C40605%2C1685530452290.1685530452502 is not closed yet, will try archiving it next time
2023-05-31 10:54:23,266 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing...
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45025,DS-d619e46a-2a33-4f62-a4db-088fb5c0c43e,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-31 10:54:23,266 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,40605,1685530452290/jenkins-hbase20.apache.org%2C40605%2C1685530452290.1685530452502; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45025,DS-d619e46a-2a33-4f62-a4db-088fb5c0c43e,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-31 10:54:23,269 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,40605,1685530452290/jenkins-hbase20.apache.org%2C40605%2C1685530452290.1685530452502 to hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/oldWALs/jenkins-hbase20.apache.org%2C40605%2C1685530452290.1685530452502
2023-05-31 10:54:35,299 INFO [Listener at localhost.localdomain/36107] wal.TestLogRolling(375): log.getCurrentFileName(): hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,40605,1685530452290/jenkins-hbase20.apache.org%2C40605%2C1685530452290.1685530463254
2023-05-31 10:54:35,301 WARN [Listener at localhost.localdomain/36107] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-05-31 10:54:35,303 WARN [ResponseProcessor for block BP-541088169-148.251.75.209-1685530450482:blk_1073741839_1019] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-541088169-148.251.75.209-1685530450482:blk_1073741839_1019
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-05-31 10:54:35,304 WARN [DataStreamer for file /user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,40605,1685530452290/jenkins-hbase20.apache.org%2C40605%2C1685530452290.1685530463254 block BP-541088169-148.251.75.209-1685530450482:blk_1073741839_1019] hdfs.DataStreamer(1548): Error Recovery for BP-541088169-148.251.75.209-1685530450482:blk_1073741839_1019 in pipeline [DatanodeInfoWithStorage[127.0.0.1:32931,DS-fc78210b-04b7-4c33-aa7e-a36c3aec0afb,DISK], DatanodeInfoWithStorage[127.0.0.1:41755,DS-6fdb9ca9-c672-4d25-a6a9-ad40166225c7,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:32931,DS-fc78210b-04b7-4c33-aa7e-a36c3aec0afb,DISK]) is bad.
2023-05-31 10:54:35,310 INFO [Listener at localhost.localdomain/36107] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-31 10:54:35,312 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1373393171_17 at /127.0.0.1:40538 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741839_1019]] datanode.DataXceiver(323): 127.0.0.1:41755:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:40538 dst: /127.0.0.1:41755
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:41755 remote=/127.0.0.1:40538]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 10:54:35,313 WARN [PacketResponder: BP-541088169-148.251.75.209-1685530450482:blk_1073741839_1019, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:41755]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run():
java.io.IOException: The stream is closed
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 10:54:35,314 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1373393171_17 at /127.0.0.1:39624 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741839_1019]] datanode.DataXceiver(323): 127.0.0.1:32931:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:39624 dst: /127.0.0.1:32931
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 10:54:35,420 WARN [BP-541088169-148.251.75.209-1685530450482 heartbeating to localhost.localdomain/127.0.0.1:40701] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-05-31 10:54:35,420 WARN [BP-541088169-148.251.75.209-1685530450482 heartbeating to localhost.localdomain/127.0.0.1:40701] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-541088169-148.251.75.209-1685530450482 (Datanode Uuid 6fad3492-0bf3-4c5d-a0cc-67dd36abcc25) service to localhost.localdomain/127.0.0.1:40701
2023-05-31 10:54:35,422 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data9/current/BP-541088169-148.251.75.209-1685530450482] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 10:54:35,422 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data10/current/BP-541088169-148.251.75.209-1685530450482] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 10:54:35,429 WARN [sync.3] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41755,DS-6fdb9ca9-c672-4d25-a6a9-ad40166225c7,DISK]]
2023-05-31 10:54:35,429 WARN [sync.3] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41755,DS-6fdb9ca9-c672-4d25-a6a9-ad40166225c7,DISK]]
2023-05-31 10:54:35,429 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C40605%2C1685530452290:(num 1685530463254) roll requested
2023-05-31 10:54:35,440 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,40605,1685530452290/jenkins-hbase20.apache.org%2C40605%2C1685530452290.1685530463254 with entries=2, filesize=2.36 KB; new WAL /user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,40605,1685530452290/jenkins-hbase20.apache.org%2C40605%2C1685530452290.1685530475429
2023-05-31 10:54:35,441 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46239,DS-77cc8813-db93-4f5d-847f-c26769fb445d,DISK], DatanodeInfoWithStorage[127.0.0.1:41755,DS-6fdb9ca9-c672-4d25-a6a9-ad40166225c7,DISK]]
2023-05-31 10:54:35,441 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,40605,1685530452290/jenkins-hbase20.apache.org%2C40605%2C1685530452290.1685530463254 is not closed yet, will try archiving it next time
2023-05-31 10:54:39,438 WARN [Listener at localhost.localdomain/36107] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-05-31 10:54:39,441 WARN [ResponseProcessor for block BP-541088169-148.251.75.209-1685530450482:blk_1073741840_1021] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-541088169-148.251.75.209-1685530450482:blk_1073741840_1021
java.io.IOException: Bad response ERROR for BP-541088169-148.251.75.209-1685530450482:blk_1073741840_1021 from datanode DatanodeInfoWithStorage[127.0.0.1:41755,DS-6fdb9ca9-c672-4d25-a6a9-ad40166225c7,DISK]
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120)
2023-05-31 10:54:39,441 WARN [DataStreamer for file /user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,40605,1685530452290/jenkins-hbase20.apache.org%2C40605%2C1685530452290.1685530475429 block BP-541088169-148.251.75.209-1685530450482:blk_1073741840_1021] hdfs.DataStreamer(1548): Error Recovery for BP-541088169-148.251.75.209-1685530450482:blk_1073741840_1021 in pipeline [DatanodeInfoWithStorage[127.0.0.1:46239,DS-77cc8813-db93-4f5d-847f-c26769fb445d,DISK], DatanodeInfoWithStorage[127.0.0.1:41755,DS-6fdb9ca9-c672-4d25-a6a9-ad40166225c7,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:41755,DS-6fdb9ca9-c672-4d25-a6a9-ad40166225c7,DISK]) is bad.
2023-05-31 10:54:39,441 WARN [PacketResponder: BP-541088169-148.251.75.209-1685530450482:blk_1073741840_1021, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:41755]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run():
java.io.IOException: Connection reset by peer
	at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
	at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
	at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
	at sun.nio.ch.IOUtil.write(IOUtil.java:65)
	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470)
	at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 10:54:39,442 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1373393171_17 at /127.0.0.1:34188 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741840_1021]] datanode.DataXceiver(323): 127.0.0.1:46239:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34188 dst: /127.0.0.1:46239
java.io.IOException: Premature EOF from inputStream
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 10:54:39,450 INFO [Listener at localhost.localdomain/36107] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-31 10:54:39,555 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1373393171_17 at /127.0.0.1:42818 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741840_1021]] datanode.DataXceiver(323): 127.0.0.1:41755:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42818 dst: /127.0.0.1:41755
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 10:54:39,559 WARN [BP-541088169-148.251.75.209-1685530450482 heartbeating to localhost.localdomain/127.0.0.1:40701] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-05-31 10:54:39,559 WARN [BP-541088169-148.251.75.209-1685530450482 heartbeating to localhost.localdomain/127.0.0.1:40701] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-541088169-148.251.75.209-1685530450482 (Datanode Uuid 2b8688d7-6ec1-43cd-b9ae-60d0dac66b1e) service to localhost.localdomain/127.0.0.1:40701
2023-05-31 10:54:39,560 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data5/current/BP-541088169-148.251.75.209-1685530450482] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 10:54:39,560 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data6/current/BP-541088169-148.251.75.209-1685530450482] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 10:54:39,566 WARN [sync.1] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. current pipeline: [DatanodeInfoWithStorage[127.0.0.1:46239,DS-77cc8813-db93-4f5d-847f-c26769fb445d,DISK]]
2023-05-31 10:54:39,566 WARN [sync.1] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:46239,DS-77cc8813-db93-4f5d-847f-c26769fb445d,DISK]]
2023-05-31 10:54:39,566 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C40605%2C1685530452290:(num 1685530475429) roll requested
2023-05-31 10:54:39,583 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1373393171_17 at /127.0.0.1:34232 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741841_1023]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data8/current]'}, localName='127.0.0.1:46239', datanodeUuid='992cc9e3-7d53-4848-aef9-de0635bde546', xmitsInProgress=0}:Exception transfering block BP-541088169-148.251.75.209-1685530450482:blk_1073741841_1023 to mirror 127.0.0.1:41755: java.net.ConnectException: Connection refused
2023-05-31 10:54:39,584 WARN [Thread-653] hdfs.DataStreamer(1658): Abandoning BP-541088169-148.251.75.209-1685530450482:blk_1073741841_1023
2023-05-31 10:54:39,584 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1373393171_17 at /127.0.0.1:34232 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741841_1023]] datanode.DataXceiver(323): 127.0.0.1:46239:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34232 dst: /127.0.0.1:46239
java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 10:54:39,586 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40605] regionserver.HRegion(9158): Flush requested on 8ee0c48ad9305e3f997566911a7479e9
2023-05-31 10:54:39,587 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 8ee0c48ad9305e3f997566911a7479e9 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB
2023-05-31 10:54:39,591 WARN [Thread-653] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41755,DS-6fdb9ca9-c672-4d25-a6a9-ad40166225c7,DISK]
2023-05-31 10:54:39,598 WARN [Thread-653] hdfs.DataStreamer(1658): Abandoning BP-541088169-148.251.75.209-1685530450482:blk_1073741842_1024
2023-05-31 10:54:39,602 WARN [Thread-653] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:43823,DS-cb5ed5e0-1d41-4571-b900-44df063c6309,DISK]
2023-05-31 10:54:39,603 WARN [Thread-655] hdfs.DataStreamer(1658): Abandoning BP-541088169-148.251.75.209-1685530450482:blk_1073741843_1025
2023-05-31 10:54:39,604 WARN [Thread-655] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41755,DS-6fdb9ca9-c672-4d25-a6a9-ad40166225c7,DISK]
2023-05-31 10:54:39,610 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1373393171_17 at /127.0.0.1:34244 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741844_1026]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data8/current]'}, localName='127.0.0.1:46239', datanodeUuid='992cc9e3-7d53-4848-aef9-de0635bde546', xmitsInProgress=0}:Exception transfering block BP-541088169-148.251.75.209-1685530450482:blk_1073741844_1026 to mirror 127.0.0.1:43823: java.net.ConnectException: Connection refused
2023-05-31 10:54:39,610 WARN [Thread-655] hdfs.DataStreamer(1658): Abandoning BP-541088169-148.251.75.209-1685530450482:blk_1073741844_1026
2023-05-31 10:54:39,610 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1373393171_17 at /127.0.0.1:34244 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741844_1026]] datanode.DataXceiver(323): 127.0.0.1:46239:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34244 dst: /127.0.0.1:46239
java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 10:54:39,611 WARN [Thread-655] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:43823,DS-cb5ed5e0-1d41-4571-b900-44df063c6309,DISK]
2023-05-31 10:54:39,613 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1373393171_17 at /127.0.0.1:34256 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741845_1027]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data8/current]'}, localName='127.0.0.1:46239', datanodeUuid='992cc9e3-7d53-4848-aef9-de0635bde546', xmitsInProgress=0}:Exception transfering block BP-541088169-148.251.75.209-1685530450482:blk_1073741845_1027 to mirror 127.0.0.1:45025: java.net.ConnectException: Connection refused
2023-05-31 10:54:39,613 WARN [Thread-653] hdfs.DataStreamer(1658): Abandoning BP-541088169-148.251.75.209-1685530450482:blk_1073741845_1027
2023-05-31 10:54:39,613 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1373393171_17 at /127.0.0.1:34256 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741845_1027]] datanode.DataXceiver(323): 127.0.0.1:46239:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34256 dst: /127.0.0.1:46239
java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:39,614 WARN [Thread-653] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45025,DS-d619e46a-2a33-4f62-a4db-088fb5c0c43e,DISK] 2023-05-31 10:54:39,614 WARN [Thread-655] hdfs.DataStreamer(1658): Abandoning BP-541088169-148.251.75.209-1685530450482:blk_1073741846_1028 2023-05-31 10:54:39,615 WARN [Thread-655] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:32931,DS-fc78210b-04b7-4c33-aa7e-a36c3aec0afb,DISK] 2023-05-31 10:54:39,617 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1373393171_17 at /127.0.0.1:34264 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741847_1029]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data8/current]'}, localName='127.0.0.1:46239', datanodeUuid='992cc9e3-7d53-4848-aef9-de0635bde546', xmitsInProgress=0}:Exception transfering block BP-541088169-148.251.75.209-1685530450482:blk_1073741847_1029 to mirror 127.0.0.1:32931: java.net.ConnectException: Connection refused 2023-05-31 10:54:39,617 WARN [Thread-653] hdfs.DataStreamer(1658): Abandoning BP-541088169-148.251.75.209-1685530450482:blk_1073741847_1029 2023-05-31 10:54:39,617 WARN [Thread-655] hdfs.DataStreamer(1658): 
Abandoning BP-541088169-148.251.75.209-1685530450482:blk_1073741848_1030 2023-05-31 10:54:39,617 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1373393171_17 at /127.0.0.1:34264 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741847_1029]] datanode.DataXceiver(323): 127.0.0.1:46239:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34264 dst: /127.0.0.1:46239 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:39,618 WARN [Thread-653] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:32931,DS-fc78210b-04b7-4c33-aa7e-a36c3aec0afb,DISK] 2023-05-31 10:54:39,618 WARN [Thread-655] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45025,DS-d619e46a-2a33-4f62-a4db-088fb5c0c43e,DISK] 2023-05-31 10:54:39,619 WARN [IPC Server handler 2 on default port 40701] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and 
org.apache.hadoop.net.NetworkTopology 2023-05-31 10:54:39,619 WARN [IPC Server handler 2 on default port 40701] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-05-31 10:54:39,619 WARN [IPC Server handler 2 on default port 40701] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-05-31 10:54:39,620 WARN [IPC Server handler 2 on default port 40701] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-05-31 10:54:39,620 WARN [IPC Server handler 2 on default port 40701] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-05-31 10:54:39,620 WARN [IPC Server handler 2 on default port 40701] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need 
of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-05-31 10:54:39,635 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,40605,1685530452290/jenkins-hbase20.apache.org%2C40605%2C1685530452290.1685530475429 with entries=13, filesize=14.09 KB; new WAL /user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,40605,1685530452290/jenkins-hbase20.apache.org%2C40605%2C1685530452290.1685530479566 2023-05-31 10:54:39,635 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46239,DS-77cc8813-db93-4f5d-847f-c26769fb445d,DISK]] 2023-05-31 10:54:39,635 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,40605,1685530452290/jenkins-hbase20.apache.org%2C40605%2C1685530452290.1685530475429 is not closed yet, will try archiving it next time 2023-05-31 10:54:39,813 WARN [sync.4] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:46239,DS-77cc8813-db93-4f5d-847f-c26769fb445d,DISK]] 2023-05-31 10:54:39,813 WARN [sync.4] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:46239,DS-77cc8813-db93-4f5d-847f-c26769fb445d,DISK]] 2023-05-31 10:54:39,813 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C40605%2C1685530452290:(num 1685530479566) roll requested 2023-05-31 10:54:39,820 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1373393171_17 at /127.0.0.1:34296 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741851_1033]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data8/current]'}, localName='127.0.0.1:46239', datanodeUuid='992cc9e3-7d53-4848-aef9-de0635bde546', xmitsInProgress=0}:Exception transfering block BP-541088169-148.251.75.209-1685530450482:blk_1073741851_1033 to mirror 127.0.0.1:32931: java.net.ConnectException: Connection refused 2023-05-31 10:54:39,820 WARN [Thread-665] hdfs.DataStreamer(1658): Abandoning BP-541088169-148.251.75.209-1685530450482:blk_1073741851_1033 2023-05-31 10:54:39,820 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1373393171_17 at /127.0.0.1:34296 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741851_1033]] datanode.DataXceiver(323): 127.0.0.1:46239:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34296 dst: /127.0.0.1:46239 java.net.ConnectException: Connection refused at 
sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:39,821 WARN [Thread-665] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:32931,DS-fc78210b-04b7-4c33-aa7e-a36c3aec0afb,DISK] 2023-05-31 10:54:39,824 WARN [Thread-665] hdfs.DataStreamer(1658): Abandoning BP-541088169-148.251.75.209-1685530450482:blk_1073741852_1034 2023-05-31 10:54:39,825 WARN [Thread-665] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41755,DS-6fdb9ca9-c672-4d25-a6a9-ad40166225c7,DISK] 2023-05-31 10:54:39,826 WARN [Thread-665] hdfs.DataStreamer(1658): Abandoning BP-541088169-148.251.75.209-1685530450482:blk_1073741853_1035 2023-05-31 10:54:39,827 WARN [Thread-665] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:43823,DS-cb5ed5e0-1d41-4571-b900-44df063c6309,DISK] 2023-05-31 10:54:39,830 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1373393171_17 at /127.0.0.1:34312 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741854_1036]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data7/current, 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data8/current]'}, localName='127.0.0.1:46239', datanodeUuid='992cc9e3-7d53-4848-aef9-de0635bde546', xmitsInProgress=0}:Exception transfering block BP-541088169-148.251.75.209-1685530450482:blk_1073741854_1036 to mirror 127.0.0.1:45025: java.net.ConnectException: Connection refused 2023-05-31 10:54:39,830 WARN [Thread-665] hdfs.DataStreamer(1658): Abandoning BP-541088169-148.251.75.209-1685530450482:blk_1073741854_1036 2023-05-31 10:54:39,830 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1373393171_17 at /127.0.0.1:34312 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741854_1036]] datanode.DataXceiver(323): 127.0.0.1:46239:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34312 dst: /127.0.0.1:46239 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:39,831 WARN [Thread-665] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45025,DS-d619e46a-2a33-4f62-a4db-088fb5c0c43e,DISK] 2023-05-31 10:54:39,832 WARN [IPC Server handler 4 on default port 40701] 
blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-05-31 10:54:39,832 WARN [IPC Server handler 4 on default port 40701] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-05-31 10:54:39,832 WARN [IPC Server handler 4 on default port 40701] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-05-31 10:54:39,838 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,40605,1685530452290/jenkins-hbase20.apache.org%2C40605%2C1685530452290.1685530479566 with entries=1, filesize=1.22 KB; new WAL /user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,40605,1685530452290/jenkins-hbase20.apache.org%2C40605%2C1685530452290.1685530479813 2023-05-31 10:54:39,838 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:46239,DS-77cc8813-db93-4f5d-847f-c26769fb445d,DISK]] 2023-05-31 10:54:39,838 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,40605,1685530452290/jenkins-hbase20.apache.org%2C40605%2C1685530452290.1685530479566 is not closed yet, will try archiving it next time 2023-05-31 10:54:40,017 WARN [sync.1] wal.FSHLog(757): Too many consecutive RollWriter requests, it's a sign of the total number of live datanodes is lower than the tolerable replicas. 2023-05-31 10:54:40,035 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=12 (bloomFilter=true), to=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8ee0c48ad9305e3f997566911a7479e9/.tmp/info/2a3e6163134749a6af3467ea9e927d27 2023-05-31 10:54:40,050 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8ee0c48ad9305e3f997566911a7479e9/.tmp/info/2a3e6163134749a6af3467ea9e927d27 as hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8ee0c48ad9305e3f997566911a7479e9/info/2a3e6163134749a6af3467ea9e927d27 2023-05-31 10:54:40,057 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8ee0c48ad9305e3f997566911a7479e9/info/2a3e6163134749a6af3467ea9e927d27, entries=5, sequenceid=12, filesize=10.0 K 2023-05-31 10:54:40,058 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): 
Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=9.45 KB/9681 for 8ee0c48ad9305e3f997566911a7479e9 in 471ms, sequenceid=12, compaction requested=false 2023-05-31 10:54:40,059 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 8ee0c48ad9305e3f997566911a7479e9: 2023-05-31 10:54:40,225 WARN [Listener at localhost.localdomain/36107] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-31 10:54:40,229 WARN [Listener at localhost.localdomain/36107] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 10:54:40,231 INFO [Listener at localhost.localdomain/36107] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 10:54:40,236 INFO [Listener at localhost.localdomain/36107] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/java.io.tmpdir/Jetty_localhost_40587_datanode____.y32a75/webapp 2023-05-31 10:54:40,241 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,40605,1685530452290/jenkins-hbase20.apache.org%2C40605%2C1685530452290.1685530463254 to hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/oldWALs/jenkins-hbase20.apache.org%2C40605%2C1685530452290.1685530463254 2023-05-31 10:54:40,308 INFO [Listener at localhost.localdomain/36107] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40587 2023-05-31 10:54:40,315 WARN [Listener at localhost.localdomain/37725] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is 
not log4j 2023-05-31 10:54:40,419 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf0ea981beea6963b: Processing first storage report for DS-cb5ed5e0-1d41-4571-b900-44df063c6309 from datanode 9223e1d3-bae9-4b53-8e62-9b61979c3624 2023-05-31 10:54:40,420 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf0ea981beea6963b: from storage DS-cb5ed5e0-1d41-4571-b900-44df063c6309 node DatanodeRegistration(127.0.0.1:39033, datanodeUuid=9223e1d3-bae9-4b53-8e62-9b61979c3624, infoPort=39473, infoSecurePort=0, ipcPort=37725, storageInfo=lv=-57;cid=testClusterID;nsid=101658556;c=1685530450482), blocks: 7, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-31 10:54:40,420 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf0ea981beea6963b: Processing first storage report for DS-a534d100-5812-4fb6-92ca-b1d7e489192c from datanode 9223e1d3-bae9-4b53-8e62-9b61979c3624 2023-05-31 10:54:40,420 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf0ea981beea6963b: from storage DS-a534d100-5812-4fb6-92ca-b1d7e489192c node DatanodeRegistration(127.0.0.1:39033, datanodeUuid=9223e1d3-bae9-4b53-8e62-9b61979c3624, infoPort=39473, infoSecurePort=0, ipcPort=37725, storageInfo=lv=-57;cid=testClusterID;nsid=101658556;c=1685530450482), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 10:54:40,802 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@39b547ec] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:46239, datanodeUuid=992cc9e3-7d53-4848-aef9-de0635bde546, infoPort=35025, infoSecurePort=0, ipcPort=42263, storageInfo=lv=-57;cid=testClusterID;nsid=101658556;c=1685530450482):Failed to transfer BP-541088169-148.251.75.209-1685530450482:blk_1073741849_1031 to 127.0.0.1:32931 got java.net.ConnectException: Connection refused at 
sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:41,217 WARN [master/jenkins-hbase20:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45025,DS-d619e46a-2a33-4f62-a4db-088fb5c0c43e,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 10:54:41,218 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C36473%2C1685530450986:(num 1685530451142) roll requested 2023-05-31 10:54:41,224 WARN [Thread-708] hdfs.DataStreamer(1658): Abandoning BP-541088169-148.251.75.209-1685530450482:blk_1073741856_1038 2023-05-31 10:54:41,225 WARN [Thread-708] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45025,DS-d619e46a-2a33-4f62-a4db-088fb5c0c43e,DISK] 2023-05-31 10:54:41,227 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at 
org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45025,DS-d619e46a-2a33-4f62-a4db-088fb5c0c43e,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 10:54:41,229 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423) at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) Caused by: 
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45025,DS-d619e46a-2a33-4f62-a4db-088fb5c0c43e,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 10:54:41,230 WARN [Thread-708] hdfs.DataStreamer(1658): Abandoning BP-541088169-148.251.75.209-1685530450482:blk_1073741857_1039 2023-05-31 10:54:41,230 WARN [Thread-708] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41755,DS-6fdb9ca9-c672-4d25-a6a9-ad40166225c7,DISK] 2023-05-31 10:54:41,232 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_51907993_17 at /127.0.0.1:42784 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741858_1040]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data7/current, 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data8/current]'}, localName='127.0.0.1:46239', datanodeUuid='992cc9e3-7d53-4848-aef9-de0635bde546', xmitsInProgress=0}:Exception transfering block BP-541088169-148.251.75.209-1685530450482:blk_1073741858_1040 to mirror 127.0.0.1:32931: java.net.ConnectException: Connection refused 2023-05-31 10:54:41,233 WARN [Thread-708] hdfs.DataStreamer(1658): Abandoning BP-541088169-148.251.75.209-1685530450482:blk_1073741858_1040 2023-05-31 10:54:41,233 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_51907993_17 at /127.0.0.1:42784 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741858_1040]] datanode.DataXceiver(323): 127.0.0.1:46239:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42784 dst: /127.0.0.1:46239 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:41,234 WARN [Thread-708] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:32931,DS-fc78210b-04b7-4c33-aa7e-a36c3aec0afb,DISK] 2023-05-31 10:54:41,243 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding 
appends; closing WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL 2023-05-31 10:54:41,243 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/MasterData/WALs/jenkins-hbase20.apache.org,36473,1685530450986/jenkins-hbase20.apache.org%2C36473%2C1685530450986.1685530451142 with entries=88, filesize=43.74 KB; new WAL /user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/MasterData/WALs/jenkins-hbase20.apache.org,36473,1685530450986/jenkins-hbase20.apache.org%2C36473%2C1685530450986.1685530481218 2023-05-31 10:54:41,243 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39033,DS-cb5ed5e0-1d41-4571-b900-44df063c6309,DISK], DatanodeInfoWithStorage[127.0.0.1:46239,DS-77cc8813-db93-4f5d-847f-c26769fb445d,DISK]] 2023-05-31 10:54:41,243 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/MasterData/WALs/jenkins-hbase20.apache.org,36473,1685530450986/jenkins-hbase20.apache.org%2C36473%2C1685530450986.1685530451142 is not closed yet, will try archiving it next time 2023-05-31 10:54:41,243 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45025,DS-d619e46a-2a33-4f62-a4db-088fb5c0c43e,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 10:54:41,244 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/MasterData/WALs/jenkins-hbase20.apache.org,36473,1685530450986/jenkins-hbase20.apache.org%2C36473%2C1685530450986.1685530451142; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45025,DS-d619e46a-2a33-4f62-a4db-088fb5c0c43e,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 10:54:41,801 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@1dbf1ee] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:46239, datanodeUuid=992cc9e3-7d53-4848-aef9-de0635bde546, infoPort=35025, infoSecurePort=0, ipcPort=42263, storageInfo=lv=-57;cid=testClusterID;nsid=101658556;c=1685530450482):Failed to transfer BP-541088169-148.251.75.209-1685530450482:blk_1073741850_1032 to 127.0.0.1:32931 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at 
org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:47,421 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@685a7a45] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:39033, datanodeUuid=9223e1d3-bae9-4b53-8e62-9b61979c3624, infoPort=39473, infoSecurePort=0, ipcPort=37725, storageInfo=lv=-57;cid=testClusterID;nsid=101658556;c=1685530450482):Failed to transfer BP-541088169-148.251.75.209-1685530450482:blk_1073741837_1013 to 127.0.0.1:32931 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:48,420 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@708504d6] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:39033, datanodeUuid=9223e1d3-bae9-4b53-8e62-9b61979c3624, infoPort=39473, infoSecurePort=0, ipcPort=37725, storageInfo=lv=-57;cid=testClusterID;nsid=101658556;c=1685530450482):Failed to transfer BP-541088169-148.251.75.209-1685530450482:blk_1073741831_1007 to 127.0.0.1:32931 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at 
org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:48,420 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@65dab010] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:39033, datanodeUuid=9223e1d3-bae9-4b53-8e62-9b61979c3624, infoPort=39473, infoSecurePort=0, ipcPort=37725, storageInfo=lv=-57;cid=testClusterID;nsid=101658556;c=1685530450482):Failed to transfer BP-541088169-148.251.75.209-1685530450482:blk_1073741827_1003 to 127.0.0.1:41755 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:50,421 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@5e6976bc] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:39033, datanodeUuid=9223e1d3-bae9-4b53-8e62-9b61979c3624, infoPort=39473, infoSecurePort=0, ipcPort=37725, storageInfo=lv=-57;cid=testClusterID;nsid=101658556;c=1685530450482):Failed to transfer BP-541088169-148.251.75.209-1685530450482:blk_1073741828_1004 to 127.0.0.1:32931 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at 
org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:50,421 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@5c3fd145] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:39033, datanodeUuid=9223e1d3-bae9-4b53-8e62-9b61979c3624, infoPort=39473, infoSecurePort=0, ipcPort=37725, storageInfo=lv=-57;cid=testClusterID;nsid=101658556;c=1685530450482):Failed to transfer BP-541088169-148.251.75.209-1685530450482:blk_1073741826_1002 to 127.0.0.1:41755 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:53,424 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@444dda0] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:39033, datanodeUuid=9223e1d3-bae9-4b53-8e62-9b61979c3624, infoPort=39473, infoSecurePort=0, ipcPort=37725, storageInfo=lv=-57;cid=testClusterID;nsid=101658556;c=1685530450482):Failed to transfer BP-541088169-148.251.75.209-1685530450482:blk_1073741825_1001 to 127.0.0.1:32931 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at 
org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:53,425 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@342eb708] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:39033, datanodeUuid=9223e1d3-bae9-4b53-8e62-9b61979c3624, infoPort=39473, infoSecurePort=0, ipcPort=37725, storageInfo=lv=-57;cid=testClusterID;nsid=101658556;c=1685530450482):Failed to transfer BP-541088169-148.251.75.209-1685530450482:blk_1073741836_1012 to 127.0.0.1:41755 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:54,424 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@2167fba7] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:39033, datanodeUuid=9223e1d3-bae9-4b53-8e62-9b61979c3624, infoPort=39473, infoSecurePort=0, ipcPort=37725, storageInfo=lv=-57;cid=testClusterID;nsid=101658556;c=1685530450482):Failed to transfer BP-541088169-148.251.75.209-1685530450482:blk_1073741834_1010 to 127.0.0.1:41755 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at 
org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:54,425 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@3f0a337] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:39033, datanodeUuid=9223e1d3-bae9-4b53-8e62-9b61979c3624, infoPort=39473, infoSecurePort=0, ipcPort=37725, storageInfo=lv=-57;cid=testClusterID;nsid=101658556;c=1685530450482):Failed to transfer BP-541088169-148.251.75.209-1685530450482:blk_1073741830_1006 to 127.0.0.1:32931 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:58,742 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_51907993_17 at /127.0.0.1:55778 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741860_1042]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data3/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data4/current]'}, localName='127.0.0.1:39033', datanodeUuid='9223e1d3-bae9-4b53-8e62-9b61979c3624', xmitsInProgress=0}:Exception transfering block BP-541088169-148.251.75.209-1685530450482:blk_1073741860_1042 to mirror 127.0.0.1:41755: java.net.ConnectException: 
Connection refused 2023-05-31 10:54:58,742 WARN [Thread-728] hdfs.DataStreamer(1658): Abandoning BP-541088169-148.251.75.209-1685530450482:blk_1073741860_1042 2023-05-31 10:54:58,743 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_51907993_17 at /127.0.0.1:55778 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741860_1042]] datanode.DataXceiver(323): 127.0.0.1:39033:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:55778 dst: /127.0.0.1:39033 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:58,744 WARN [Thread-728] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41755,DS-6fdb9ca9-c672-4d25-a6a9-ad40166225c7,DISK] 2023-05-31 10:54:58,749 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_51907993_17 at /127.0.0.1:55784 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741861_1043]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data3/current, 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data4/current]'}, localName='127.0.0.1:39033', datanodeUuid='9223e1d3-bae9-4b53-8e62-9b61979c3624', xmitsInProgress=0}:Exception transfering block BP-541088169-148.251.75.209-1685530450482:blk_1073741861_1043 to mirror 127.0.0.1:32931: java.net.ConnectException: Connection refused 2023-05-31 10:54:58,749 WARN [Thread-728] hdfs.DataStreamer(1658): Abandoning BP-541088169-148.251.75.209-1685530450482:blk_1073741861_1043 2023-05-31 10:54:58,749 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_51907993_17 at /127.0.0.1:55784 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741861_1043]] datanode.DataXceiver(323): 127.0.0.1:39033:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:55784 dst: /127.0.0.1:39033 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:58,752 WARN [Thread-728] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:32931,DS-fc78210b-04b7-4c33-aa7e-a36c3aec0afb,DISK] 2023-05-31 10:54:58,761 INFO [Listener at localhost.localdomain/37725] wal.AbstractFSWAL(802): Rolled WAL 
/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,40605,1685530452290/jenkins-hbase20.apache.org%2C40605%2C1685530452290.1685530479813 with entries=2, filesize=1.57 KB; new WAL /user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,40605,1685530452290/jenkins-hbase20.apache.org%2C40605%2C1685530452290.1685530498732 2023-05-31 10:54:58,761 DEBUG [Listener at localhost.localdomain/37725] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46239,DS-77cc8813-db93-4f5d-847f-c26769fb445d,DISK], DatanodeInfoWithStorage[127.0.0.1:39033,DS-cb5ed5e0-1d41-4571-b900-44df063c6309,DISK]] 2023-05-31 10:54:58,762 DEBUG [Listener at localhost.localdomain/37725] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,40605,1685530452290/jenkins-hbase20.apache.org%2C40605%2C1685530452290.1685530479813 is not closed yet, will try archiving it next time 2023-05-31 10:54:58,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40605] regionserver.HRegion(9158): Flush requested on 8ee0c48ad9305e3f997566911a7479e9 2023-05-31 10:54:58,767 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 8ee0c48ad9305e3f997566911a7479e9 1/1 column families, dataSize=10.50 KB heapSize=11.50 KB 2023-05-31 10:54:58,768 INFO [sync.0] wal.FSHLog(774): LowReplication-Roller was enabled. 
2023-05-31 10:54:58,779 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1373393171_17 at /127.0.0.1:55322 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741863_1045]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data8/current]'}, localName='127.0.0.1:46239', datanodeUuid='992cc9e3-7d53-4848-aef9-de0635bde546', xmitsInProgress=0}:Exception transfering block BP-541088169-148.251.75.209-1685530450482:blk_1073741863_1045 to mirror 127.0.0.1:41755: java.net.ConnectException: Connection refused 2023-05-31 10:54:58,779 WARN [Thread-737] hdfs.DataStreamer(1658): Abandoning BP-541088169-148.251.75.209-1685530450482:blk_1073741863_1045 2023-05-31 10:54:58,779 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1373393171_17 at /127.0.0.1:55322 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741863_1045]] datanode.DataXceiver(323): 127.0.0.1:46239:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:55322 dst: /127.0.0.1:46239 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:58,780 WARN [Thread-737] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41755,DS-6fdb9ca9-c672-4d25-a6a9-ad40166225c7,DISK] 2023-05-31 10:54:58,782 WARN [Thread-737] hdfs.DataStreamer(1658): Abandoning BP-541088169-148.251.75.209-1685530450482:blk_1073741864_1046 2023-05-31 10:54:58,782 WARN [Thread-737] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:32931,DS-fc78210b-04b7-4c33-aa7e-a36c3aec0afb,DISK] 2023-05-31 10:54:58,785 INFO [Listener at localhost.localdomain/37725] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-31 10:54:58,785 INFO [Listener at localhost.localdomain/37725] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-31 10:54:58,785 DEBUG [Listener at localhost.localdomain/37725] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x13adcee8 to 127.0.0.1:60520 2023-05-31 10:54:58,785 DEBUG [Listener at localhost.localdomain/37725] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 10:54:58,785 DEBUG [Listener at localhost.localdomain/37725] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-31 10:54:58,785 DEBUG [Listener at localhost.localdomain/37725] util.JVMClusterUtil(257): Found active master hash=1814483288, stopped=false 2023-05-31 10:54:58,785 INFO [Listener at localhost.localdomain/37725] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,36473,1685530450986 2023-05-31 10:54:58,787 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 10:54:58,787 
DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): regionserver:40605-0x101a127b43e0005, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 10:54:58,787 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:54:58,787 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): regionserver:39337-0x101a127b43e0001, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 10:54:58,787 INFO [Listener at localhost.localdomain/37725] procedure2.ProcedureExecutor(629): Stopping 2023-05-31 10:54:58,787 DEBUG [Listener at localhost.localdomain/37725] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x75f23b94 to 127.0.0.1:60520 2023-05-31 10:54:58,788 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 10:54:58,789 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40605-0x101a127b43e0005, quorum=127.0.0.1:60520, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 10:54:58,789 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39337-0x101a127b43e0001, quorum=127.0.0.1:60520, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 10:54:58,789 DEBUG [Listener at localhost.localdomain/37725] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 10:54:58,789 INFO [Listener at localhost.localdomain/37725] regionserver.HRegionServer(2295): ***** STOPPING region server 
'jenkins-hbase20.apache.org,39337,1685530451030' ***** 2023-05-31 10:54:58,789 INFO [Listener at localhost.localdomain/37725] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-31 10:54:58,789 INFO [Listener at localhost.localdomain/37725] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,40605,1685530452290' ***** 2023-05-31 10:54:58,789 INFO [RS:0;jenkins-hbase20:39337] regionserver.HeapMemoryManager(220): Stopping 2023-05-31 10:54:58,789 INFO [Listener at localhost.localdomain/37725] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-31 10:54:58,789 INFO [RS:0;jenkins-hbase20:39337] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-31 10:54:58,789 INFO [RS:0;jenkins-hbase20:39337] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-31 10:54:58,790 INFO [RS:0;jenkins-hbase20:39337] regionserver.HRegionServer(3303): Received CLOSE for f519bde9341dc78b72a39524405e362b 2023-05-31 10:54:58,789 INFO [RS:1;jenkins-hbase20:40605] regionserver.HeapMemoryManager(220): Stopping 2023-05-31 10:54:58,789 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-31 10:54:58,790 INFO [RS:0;jenkins-hbase20:39337] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,39337,1685530451030 2023-05-31 10:54:58,791 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing f519bde9341dc78b72a39524405e362b, disabling compactions & flushes 2023-05-31 10:54:58,791 DEBUG [RS:0;jenkins-hbase20:39337] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6ff56df5 to 127.0.0.1:60520 2023-05-31 10:54:58,791 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685530451672.f519bde9341dc78b72a39524405e362b. 
2023-05-31 10:54:58,791 DEBUG [RS:0;jenkins-hbase20:39337] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 10:54:58,791 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685530451672.f519bde9341dc78b72a39524405e362b. 2023-05-31 10:54:58,791 INFO [RS:0;jenkins-hbase20:39337] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-31 10:54:58,791 INFO [RS:0;jenkins-hbase20:39337] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-31 10:54:58,791 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685530451672.f519bde9341dc78b72a39524405e362b. after waiting 0 ms 2023-05-31 10:54:58,791 INFO [RS:0;jenkins-hbase20:39337] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-31 10:54:58,791 INFO [RS:0;jenkins-hbase20:39337] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-31 10:54:58,791 INFO [RS:0;jenkins-hbase20:39337] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-05-31 10:54:58,792 DEBUG [RS:0;jenkins-hbase20:39337] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, f519bde9341dc78b72a39524405e362b=hbase:namespace,,1685530451672.f519bde9341dc78b72a39524405e362b.} 2023-05-31 10:54:58,792 DEBUG [RS:0;jenkins-hbase20:39337] regionserver.HRegionServer(1504): Waiting on 1588230740, f519bde9341dc78b72a39524405e362b 2023-05-31 10:54:58,792 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685530451672.f519bde9341dc78b72a39524405e362b. 
2023-05-31 10:54:58,792 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing f519bde9341dc78b72a39524405e362b 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-31 10:54:58,795 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-31 10:54:58,795 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-31 10:54:58,795 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-31 10:54:58,795 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-31 10:54:58,795 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-31 10:54:58,795 WARN [RS:0;jenkins-hbase20:39337.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=7, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45025,DS-d619e46a-2a33-4f62-a4db-088fb5c0c43e,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 10:54:58,795 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.93 KB heapSize=5.45 KB 2023-05-31 10:54:58,796 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C39337%2C1685530451030:(num 1685530451438) roll requested 2023-05-31 10:54:58,796 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for f519bde9341dc78b72a39524405e362b: 2023-05-31 10:54:58,796 WARN [RS_OPEN_META-regionserver/jenkins-hbase20:0-0.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=15, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45025,DS-d619e46a-2a33-4f62-a4db-088fb5c0c43e,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 10:54:58,796 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.50 KB at sequenceid=25 (bloomFilter=true), to=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8ee0c48ad9305e3f997566911a7479e9/.tmp/info/fce755e7beda4830a61bb047330f3ed3 2023-05-31 10:54:58,796 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-31 10:54:58,797 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-05-31 10:54:58,797 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase20.apache.org,39337,1685530451030: Unrecoverable exception while closing hbase:namespace,,1685530451672.f519bde9341dc78b72a39524405e362b. 
***** org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45025,DS-d619e46a-2a33-4f62-a4db-088fb5c0c43e,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 10:54:58,798 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2023-05-31 10:54:58,804 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for java.lang:type=Memory 2023-05-31 10:54:58,806 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC 2023-05-31 10:54:58,806 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication 2023-05-31 10:54:58,806 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server 2023-05-31 10:54:58,806 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 1027080192, "init": 524288000, "max": 2051014656, "used": 344529928 }, "NonHeapMemoryUsage": { "committed": 133455872, "init": 2555904, "max": -1, "used": 130723520 }, "Verbose": false, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] } 2023-05-31 10:54:58,810 WARN [Thread-745] hdfs.DataStreamer(1658): Abandoning BP-541088169-148.251.75.209-1685530450482:blk_1073741866_1048 2023-05-31 10:54:58,810 WARN [Thread-745] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41755,DS-6fdb9ca9-c672-4d25-a6a9-ad40166225c7,DISK] 2023-05-31 10:54:58,812 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8ee0c48ad9305e3f997566911a7479e9/.tmp/info/fce755e7beda4830a61bb047330f3ed3 as hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8ee0c48ad9305e3f997566911a7479e9/info/fce755e7beda4830a61bb047330f3ed3 2023-05-31 10:54:58,813 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36473] master.MasterRpcServices(609): jenkins-hbase20.apache.org,39337,1685530451030 reported a fatal error: ***** ABORTING region server jenkins-hbase20.apache.org,39337,1685530451030: Unrecoverable exception while closing hbase:namespace,,1685530451672.f519bde9341dc78b72a39524405e362b. 
***** Cause: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45025,DS-d619e46a-2a33-4f62-a4db-088fb5c0c43e,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 10:54:58,820 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL 2023-05-31 10:54:58,820 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,39337,1685530451030/jenkins-hbase20.apache.org%2C39337%2C1685530451030.1685530451438 with entries=3, filesize=601 B; new WAL /user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,39337,1685530451030/jenkins-hbase20.apache.org%2C39337%2C1685530451030.1685530498796 2023-05-31 10:54:58,821 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with 
pipeline: [DatanodeInfoWithStorage[127.0.0.1:46239,DS-77cc8813-db93-4f5d-847f-c26769fb445d,DISK], DatanodeInfoWithStorage[127.0.0.1:39033,DS-cb5ed5e0-1d41-4571-b900-44df063c6309,DISK]] 2023-05-31 10:54:58,822 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45025,DS-d619e46a-2a33-4f62-a4db-088fb5c0c43e,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 10:54:58,822 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,39337,1685530451030/jenkins-hbase20.apache.org%2C39337%2C1685530451030.1685530451438 is not closed yet, will try archiving it next time 2023-05-31 10:54:58,822 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,39337,1685530451030/jenkins-hbase20.apache.org%2C39337%2C1685530451030.1685530451438; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45025,DS-d619e46a-2a33-4f62-a4db-088fb5c0c43e,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 10:54:58,822 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C39337%2C1685530451030.meta:.meta(num 1685530451600) roll requested 2023-05-31 10:54:58,828 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8ee0c48ad9305e3f997566911a7479e9/info/fce755e7beda4830a61bb047330f3ed3, entries=8, sequenceid=25, filesize=13.2 K 2023-05-31 10:54:58,829 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1140454755_17 at /127.0.0.1:55352 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741868_1050]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data8/current]'}, localName='127.0.0.1:46239', datanodeUuid='992cc9e3-7d53-4848-aef9-de0635bde546', xmitsInProgress=0}:Exception transfering block BP-541088169-148.251.75.209-1685530450482:blk_1073741868_1050 to mirror 127.0.0.1:41755: java.net.ConnectException: Connection refused 2023-05-31 10:54:58,829 WARN [Thread-754] hdfs.DataStreamer(1658): Abandoning BP-541088169-148.251.75.209-1685530450482:blk_1073741868_1050 2023-05-31 10:54:58,829 INFO [MemStoreFlusher.0] 
regionserver.HRegion(2948): Finished flush of dataSize ~10.50 KB/10757, heapSize ~11.48 KB/11760, currentSize=9.46 KB/9684 for 8ee0c48ad9305e3f997566911a7479e9 in 62ms, sequenceid=25, compaction requested=false 2023-05-31 10:54:58,829 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 8ee0c48ad9305e3f997566911a7479e9: 2023-05-31 10:54:58,829 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1140454755_17 at /127.0.0.1:55352 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741868_1050]] datanode.DataXceiver(323): 127.0.0.1:46239:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:55352 dst: /127.0.0.1:46239 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:58,830 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=23.2 K, sizeToCheck=16.0 K 2023-05-31 10:54:58,830 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-31 10:54:58,830 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split 
hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8ee0c48ad9305e3f997566911a7479e9/info/fce755e7beda4830a61bb047330f3ed3 because midkey is the same as first or last row 2023-05-31 10:54:58,830 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-31 10:54:58,830 WARN [Thread-754] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41755,DS-6fdb9ca9-c672-4d25-a6a9-ad40166225c7,DISK] 2023-05-31 10:54:58,830 INFO [RS:1;jenkins-hbase20:40605] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-31 10:54:58,830 INFO [RS:1;jenkins-hbase20:40605] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-31 10:54:58,830 INFO [RS:1;jenkins-hbase20:40605] regionserver.HRegionServer(3303): Received CLOSE for 8ee0c48ad9305e3f997566911a7479e9 2023-05-31 10:54:58,830 INFO [RS:1;jenkins-hbase20:40605] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,40605,1685530452290 2023-05-31 10:54:58,830 DEBUG [RS:1;jenkins-hbase20:40605] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2f63234a to 127.0.0.1:60520 2023-05-31 10:54:58,830 DEBUG [RS:1;jenkins-hbase20:40605] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 10:54:58,830 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 8ee0c48ad9305e3f997566911a7479e9, disabling compactions & flushes 2023-05-31 10:54:58,830 INFO [RS:1;jenkins-hbase20:40605] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-05-31 10:54:58,830 DEBUG [RS:1;jenkins-hbase20:40605] regionserver.HRegionServer(1478): Online Regions={8ee0c48ad9305e3f997566911a7479e9=TestLogRolling-testLogRollOnDatanodeDeath,,1685530452399.8ee0c48ad9305e3f997566911a7479e9.} 2023-05-31 10:54:58,831 
DEBUG [RS:1;jenkins-hbase20:40605] regionserver.HRegionServer(1504): Waiting on 8ee0c48ad9305e3f997566911a7479e9 2023-05-31 10:54:58,830 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1685530452399.8ee0c48ad9305e3f997566911a7479e9. 2023-05-31 10:54:58,831 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685530452399.8ee0c48ad9305e3f997566911a7479e9. 2023-05-31 10:54:58,831 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685530452399.8ee0c48ad9305e3f997566911a7479e9. after waiting 0 ms 2023-05-31 10:54:58,831 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1685530452399.8ee0c48ad9305e3f997566911a7479e9. 
2023-05-31 10:54:58,831 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 8ee0c48ad9305e3f997566911a7479e9 1/1 column families, dataSize=9.46 KB heapSize=10.38 KB 2023-05-31 10:54:58,832 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1140454755_17 at /127.0.0.1:55820 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741869_1051]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data3/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data4/current]'}, localName='127.0.0.1:39033', datanodeUuid='9223e1d3-bae9-4b53-8e62-9b61979c3624', xmitsInProgress=0}:Exception transfering block BP-541088169-148.251.75.209-1685530450482:blk_1073741869_1051 to mirror 127.0.0.1:32931: java.net.ConnectException: Connection refused 2023-05-31 10:54:58,832 WARN [Thread-754] hdfs.DataStreamer(1658): Abandoning BP-541088169-148.251.75.209-1685530450482:blk_1073741869_1051 2023-05-31 10:54:58,833 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1140454755_17 at /127.0.0.1:55820 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741869_1051]] datanode.DataXceiver(323): 127.0.0.1:39033:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:55820 dst: /127.0.0.1:39033 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:58,833 WARN [Thread-754] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:32931,DS-fc78210b-04b7-4c33-aa7e-a36c3aec0afb,DISK] 2023-05-31 10:54:58,842 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL 2023-05-31 10:54:58,842 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,39337,1685530451030/jenkins-hbase20.apache.org%2C39337%2C1685530451030.meta.1685530451600.meta with entries=11, filesize=3.69 KB; new WAL /user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,39337,1685530451030/jenkins-hbase20.apache.org%2C39337%2C1685530451030.meta.1685530498822.meta 2023-05-31 10:54:58,843 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39033,DS-cb5ed5e0-1d41-4571-b900-44df063c6309,DISK], DatanodeInfoWithStorage[127.0.0.1:46239,DS-77cc8813-db93-4f5d-847f-c26769fb445d,DISK]] 2023-05-31 10:54:58,843 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,39337,1685530451030/jenkins-hbase20.apache.org%2C39337%2C1685530451030.meta.1685530451600.meta is not closed yet, will try 
archiving it next time 2023-05-31 10:54:58,843 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45025,DS-d619e46a-2a33-4f62-a4db-088fb5c0c43e,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 10:54:58,844 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,39337,1685530451030/jenkins-hbase20.apache.org%2C39337%2C1685530451030.meta.1685530451600.meta; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45025,DS-d619e46a-2a33-4f62-a4db-088fb5c0c43e,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 10:54:58,847 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=9.46 KB at sequenceid=37 (bloomFilter=true), to=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8ee0c48ad9305e3f997566911a7479e9/.tmp/info/35ba785fdffc4840885a8eb5c52928e5 2023-05-31 10:54:58,856 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8ee0c48ad9305e3f997566911a7479e9/.tmp/info/35ba785fdffc4840885a8eb5c52928e5 as hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8ee0c48ad9305e3f997566911a7479e9/info/35ba785fdffc4840885a8eb5c52928e5 2023-05-31 10:54:58,862 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8ee0c48ad9305e3f997566911a7479e9/info/35ba785fdffc4840885a8eb5c52928e5, entries=9, sequenceid=37, filesize=14.2 K 2023-05-31 10:54:58,863 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~9.46 KB/9684, heapSize ~10.36 KB/10608, currentSize=0 B/0 for 8ee0c48ad9305e3f997566911a7479e9 in 32ms, sequenceid=37, compaction requested=true 2023-05-31 
10:54:58,870 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/data/default/TestLogRolling-testLogRollOnDatanodeDeath/8ee0c48ad9305e3f997566911a7479e9/recovered.edits/40.seqid, newMaxSeqId=40, maxSeqId=1 2023-05-31 10:54:58,871 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1685530452399.8ee0c48ad9305e3f997566911a7479e9. 2023-05-31 10:54:58,871 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 8ee0c48ad9305e3f997566911a7479e9: 2023-05-31 10:54:58,871 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1685530452399.8ee0c48ad9305e3f997566911a7479e9. 2023-05-31 10:54:58,992 INFO [RS:0;jenkins-hbase20:39337] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-31 10:54:58,992 INFO [RS:0;jenkins-hbase20:39337] regionserver.HRegionServer(3303): Received CLOSE for f519bde9341dc78b72a39524405e362b 2023-05-31 10:54:58,992 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-31 10:54:58,992 DEBUG [RS:0;jenkins-hbase20:39337] regionserver.HRegionServer(1504): Waiting on 1588230740, f519bde9341dc78b72a39524405e362b 2023-05-31 10:54:58,992 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-31 10:54:58,993 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-31 10:54:58,993 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 
2023-05-31 10:54:58,992 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing f519bde9341dc78b72a39524405e362b, disabling compactions & flushes 2023-05-31 10:54:58,993 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-31 10:54:58,993 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685530451672.f519bde9341dc78b72a39524405e362b. 2023-05-31 10:54:58,993 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685530451672.f519bde9341dc78b72a39524405e362b. 2023-05-31 10:54:58,993 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685530451672.f519bde9341dc78b72a39524405e362b. after waiting 0 ms 2023-05-31 10:54:58,993 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685530451672.f519bde9341dc78b72a39524405e362b. 2023-05-31 10:54:58,993 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-31 10:54:58,993 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for f519bde9341dc78b72a39524405e362b: 2023-05-31 10:54:58,993 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-05-31 10:54:58,994 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. 
Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1685530451672.f519bde9341dc78b72a39524405e362b. 2023-05-31 10:54:59,031 INFO [RS:1;jenkins-hbase20:40605] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,40605,1685530452290; all regions closed. 2023-05-31 10:54:59,032 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,40605,1685530452290 2023-05-31 10:54:59,048 DEBUG [RS:1;jenkins-hbase20:40605] wal.AbstractFSWAL(1028): Moved 4 WAL file(s) to /user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/oldWALs 2023-05-31 10:54:59,048 INFO [RS:1;jenkins-hbase20:40605] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C40605%2C1685530452290:(num 1685530498732) 2023-05-31 10:54:59,048 DEBUG [RS:1;jenkins-hbase20:40605] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 10:54:59,048 INFO [RS:1;jenkins-hbase20:40605] regionserver.LeaseManager(133): Closed leases 2023-05-31 10:54:59,048 INFO [RS:1;jenkins-hbase20:40605] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-05-31 10:54:59,048 INFO [RS:1;jenkins-hbase20:40605] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-31 10:54:59,048 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-31 10:54:59,048 INFO [RS:1;jenkins-hbase20:40605] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-31 10:54:59,048 INFO [RS:1;jenkins-hbase20:40605] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-05-31 10:54:59,049 INFO [RS:1;jenkins-hbase20:40605] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:40605 2023-05-31 10:54:59,052 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): regionserver:40605-0x101a127b43e0005, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,40605,1685530452290 2023-05-31 10:54:59,052 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 10:54:59,052 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): regionserver:40605-0x101a127b43e0005, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 10:54:59,052 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): regionserver:39337-0x101a127b43e0001, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,40605,1685530452290 2023-05-31 10:54:59,052 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): regionserver:39337-0x101a127b43e0001, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 10:54:59,053 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,40605,1685530452290] 2023-05-31 10:54:59,053 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,40605,1685530452290; numProcessing=1 2023-05-31 10:54:59,053 DEBUG [RegionServerTracker-0] 
zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,40605,1685530452290 already deleted, retry=false 2023-05-31 10:54:59,054 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,40605,1685530452290 expired; onlineServers=1 2023-05-31 10:54:59,188 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): regionserver:40605-0x101a127b43e0005, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 10:54:59,188 INFO [RS:1;jenkins-hbase20:40605] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,40605,1685530452290; zookeeper connection closed. 2023-05-31 10:54:59,188 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): regionserver:40605-0x101a127b43e0005, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 10:54:59,190 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6234268d] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6234268d 2023-05-31 10:54:59,193 INFO [RS:0;jenkins-hbase20:39337] regionserver.HRegionServer(1499): We were exiting though online regions are not empty, because some regions failed closing 2023-05-31 10:54:59,193 INFO [RS:0;jenkins-hbase20:39337] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,39337,1685530451030; all regions closed. 
2023-05-31 10:54:59,194 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,39337,1685530451030 2023-05-31 10:54:59,206 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/WALs/jenkins-hbase20.apache.org,39337,1685530451030 2023-05-31 10:54:59,212 DEBUG [RS:0;jenkins-hbase20:39337] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 10:54:59,212 INFO [RS:0;jenkins-hbase20:39337] regionserver.LeaseManager(133): Closed leases 2023-05-31 10:54:59,213 INFO [RS:0;jenkins-hbase20:39337] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-05-31 10:54:59,213 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-31 10:54:59,213 INFO [RS:0;jenkins-hbase20:39337] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:39337 2023-05-31 10:54:59,214 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): regionserver:39337-0x101a127b43e0001, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,39337,1685530451030 2023-05-31 10:54:59,214 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 10:54:59,215 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,39337,1685530451030] 2023-05-31 10:54:59,215 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,39337,1685530451030; numProcessing=2 2023-05-31 10:54:59,216 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,39337,1685530451030 already deleted, retry=false 2023-05-31 10:54:59,216 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,39337,1685530451030 expired; onlineServers=0 2023-05-31 10:54:59,216 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,36473,1685530450986' ***** 2023-05-31 10:54:59,216 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-31 10:54:59,217 DEBUG [M:0;jenkins-hbase20:36473] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@68195804, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, 
maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-31 10:54:59,217 INFO [M:0;jenkins-hbase20:36473] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,36473,1685530450986 2023-05-31 10:54:59,217 INFO [M:0;jenkins-hbase20:36473] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,36473,1685530450986; all regions closed. 2023-05-31 10:54:59,217 DEBUG [M:0;jenkins-hbase20:36473] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 10:54:59,217 DEBUG [M:0;jenkins-hbase20:36473] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-31 10:54:59,217 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-05-31 10:54:59,217 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685530451218] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685530451218,5,FailOnTimeoutGroup] 2023-05-31 10:54:59,217 DEBUG [M:0;jenkins-hbase20:36473] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-31 10:54:59,217 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685530451217] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685530451217,5,FailOnTimeoutGroup] 2023-05-31 10:54:59,218 INFO [M:0;jenkins-hbase20:36473] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-31 10:54:59,218 INFO [M:0;jenkins-hbase20:36473] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-05-31 10:54:59,218 INFO [M:0;jenkins-hbase20:36473] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-05-31 10:54:59,218 DEBUG [M:0;jenkins-hbase20:36473] master.HMaster(1512): Stopping service threads 2023-05-31 10:54:59,218 INFO [M:0;jenkins-hbase20:36473] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-31 10:54:59,219 ERROR [M:0;jenkins-hbase20:36473] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-31 10:54:59,219 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-31 10:54:59,219 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:54:59,219 INFO [M:0;jenkins-hbase20:36473] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-31 10:54:59,219 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-31 10:54:59,219 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 10:54:59,220 DEBUG [M:0;jenkins-hbase20:36473] zookeeper.ZKUtil(398): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-31 10:54:59,220 WARN [M:0;jenkins-hbase20:36473] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-31 10:54:59,220 INFO [M:0;jenkins-hbase20:36473] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-31 10:54:59,220 INFO [M:0;jenkins-hbase20:36473] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-31 10:54:59,220 DEBUG [M:0;jenkins-hbase20:36473] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-31 10:54:59,220 INFO [M:0;jenkins-hbase20:36473] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 10:54:59,221 DEBUG [M:0;jenkins-hbase20:36473] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 10:54:59,221 DEBUG [M:0;jenkins-hbase20:36473] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-31 10:54:59,221 DEBUG [M:0;jenkins-hbase20:36473] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-31 10:54:59,221 INFO [M:0;jenkins-hbase20:36473] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.11 KB heapSize=45.77 KB 2023-05-31 10:54:59,230 WARN [Thread-770] hdfs.DataStreamer(1658): Abandoning BP-541088169-148.251.75.209-1685530450482:blk_1073741872_1054 2023-05-31 10:54:59,231 WARN [Thread-770] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:32931,DS-fc78210b-04b7-4c33-aa7e-a36c3aec0afb,DISK] 2023-05-31 10:54:59,233 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_51907993_17 at /127.0.0.1:55376 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741873_1055]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data8/current]'}, localName='127.0.0.1:46239', datanodeUuid='992cc9e3-7d53-4848-aef9-de0635bde546', xmitsInProgress=0}:Exception transfering block BP-541088169-148.251.75.209-1685530450482:blk_1073741873_1055 to mirror 127.0.0.1:41755: java.net.ConnectException: Connection refused 2023-05-31 10:54:59,233 WARN [Thread-770] hdfs.DataStreamer(1658): Abandoning BP-541088169-148.251.75.209-1685530450482:blk_1073741873_1055 2023-05-31 10:54:59,233 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_51907993_17 at /127.0.0.1:55376 [Receiving block BP-541088169-148.251.75.209-1685530450482:blk_1073741873_1055]] datanode.DataXceiver(323): 127.0.0.1:46239:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:55376 dst: /127.0.0.1:46239 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:54:59,234 WARN [Thread-770] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41755,DS-6fdb9ca9-c672-4d25-a6a9-ad40166225c7,DISK] 2023-05-31 10:54:59,239 INFO [M:0;jenkins-hbase20:36473] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.11 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/2e3a1c9a3c314f22a275563b78a85ea9 2023-05-31 10:54:59,245 DEBUG [M:0;jenkins-hbase20:36473] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/2e3a1c9a3c314f22a275563b78a85ea9 as hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/2e3a1c9a3c314f22a275563b78a85ea9 2023-05-31 10:54:59,250 INFO [M:0;jenkins-hbase20:36473] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40701/user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/2e3a1c9a3c314f22a275563b78a85ea9, 
entries=11, sequenceid=92, filesize=7.0 K 2023-05-31 10:54:59,251 INFO [M:0;jenkins-hbase20:36473] regionserver.HRegion(2948): Finished flush of dataSize ~38.11 KB/39023, heapSize ~45.75 KB/46848, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 30ms, sequenceid=92, compaction requested=false 2023-05-31 10:54:59,252 INFO [M:0;jenkins-hbase20:36473] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 10:54:59,252 DEBUG [M:0;jenkins-hbase20:36473] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 10:54:59,252 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/6c0f2825-acdb-8026-9212-1e5f7cd3a686/MasterData/WALs/jenkins-hbase20.apache.org,36473,1685530450986 2023-05-31 10:54:59,256 INFO [M:0;jenkins-hbase20:36473] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-31 10:54:59,256 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-31 10:54:59,256 INFO [M:0;jenkins-hbase20:36473] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:36473 2023-05-31 10:54:59,258 DEBUG [M:0;jenkins-hbase20:36473] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,36473,1685530450986 already deleted, retry=false 2023-05-31 10:54:59,312 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-31 10:54:59,316 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): regionserver:39337-0x101a127b43e0001, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 10:54:59,316 INFO [RS:0;jenkins-hbase20:39337] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,39337,1685530451030; zookeeper connection closed. 
2023-05-31 10:54:59,316 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): regionserver:39337-0x101a127b43e0001, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 10:54:59,317 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1f7a5e08] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1f7a5e08 2023-05-31 10:54:59,318 INFO [Listener at localhost.localdomain/37725] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 2 regionserver(s) complete 2023-05-31 10:54:59,416 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 10:54:59,416 DEBUG [Listener at localhost.localdomain/39713-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101a127b43e0000, quorum=127.0.0.1:60520, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 10:54:59,416 INFO [M:0;jenkins-hbase20:36473] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,36473,1685530450986; zookeeper connection closed. 
2023-05-31 10:54:59,418 WARN [Listener at localhost.localdomain/37725] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 10:54:59,422 WARN [BP-541088169-148.251.75.209-1685530450482 heartbeating to localhost.localdomain/127.0.0.1:40701] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-541088169-148.251.75.209-1685530450482 (Datanode Uuid 9223e1d3-bae9-4b53-8e62-9b61979c3624) service to localhost.localdomain/127.0.0.1:40701 2023-05-31 10:54:59,425 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data3/current/BP-541088169-148.251.75.209-1685530450482] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 10:54:59,426 INFO [Listener at localhost.localdomain/37725] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 10:54:59,426 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data4/current/BP-541088169-148.251.75.209-1685530450482] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 10:54:59,538 WARN [Listener at localhost.localdomain/37725] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 10:54:59,542 INFO [Listener at localhost.localdomain/37725] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 10:54:59,650 WARN [BP-541088169-148.251.75.209-1685530450482 heartbeating to localhost.localdomain/127.0.0.1:40701] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-31 10:54:59,650 
WARN [BP-541088169-148.251.75.209-1685530450482 heartbeating to localhost.localdomain/127.0.0.1:40701] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-541088169-148.251.75.209-1685530450482 (Datanode Uuid 992cc9e3-7d53-4848-aef9-de0635bde546) service to localhost.localdomain/127.0.0.1:40701 2023-05-31 10:54:59,651 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data7/current/BP-541088169-148.251.75.209-1685530450482] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 10:54:59,652 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/cluster_43f217a9-8bc8-0746-05d0-7bba58ce8c4d/dfs/data/data8/current/BP-541088169-148.251.75.209-1685530450482] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 10:54:59,665 INFO [Listener at localhost.localdomain/37725] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-05-31 10:54:59,781 INFO [Listener at localhost.localdomain/37725] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-31 10:54:59,812 INFO [Listener at localhost.localdomain/37725] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-31 10:54:59,822 INFO [Listener at localhost.localdomain/37725] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=78 (was 51) Potentially hanging thread: RS-EventLoopGroup-5-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-6-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-3-worker-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: regionserver/jenkins-hbase20:0.leaseChecker java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.hbase.regionserver.LeaseManager.run(LeaseManager.java:82) Potentially hanging thread: LeaseRenewer:jenkins.hfs.2@localhost.localdomain:40701 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Abort regionserver monitor java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: nioEventLoopGroup-13-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-12-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-12-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/37725 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) 
org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:40701 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-12-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
 org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68)
 org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879)
 org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526)
 org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
 org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
 org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
 java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: IPC Client (1224493049) connection to localhost.localdomain/127.0.0.1:40701 from jenkins
 java.lang.Object.wait(Native Method)
 org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
 org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)

Potentially hanging thread: LeaseRenewer:jenkins.hfs.1@localhost.localdomain:40701
 java.lang.Thread.sleep(Native Method)
 org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411)
 org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76)
 org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307)
 java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RS-EventLoopGroup-6-2
 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
 org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
 org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
 org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
 org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
 org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
 java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RPCClient-NioEventLoopGroup-4-4
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
 org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68)
 org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879)
 org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526)
 org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
 org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
 org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
 java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RS-EventLoopGroup-6-1
 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
 org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
 org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
 org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
 org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
 org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
 java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: nioEventLoopGroup-16-1
 java.lang.Thread.sleep(Native Method)
 io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
 io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
 io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
 io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
 io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
 java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: nioEventLoopGroup-17-2
 java.lang.Thread.sleep(Native Method)
 io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
 io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
 io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
 io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
 io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
 java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: nioEventLoopGroup-13-2
 java.lang.Thread.sleep(Native Method)
 io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
 io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
 io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
 io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
 io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
 java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: IPC Client (1224493049) connection to localhost.localdomain/127.0.0.1:40701 from jenkins.hfs.2
 java.lang.Object.wait(Native Method)
 org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
 org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)

Potentially hanging thread: RPCClient-NioEventLoopGroup-4-3
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
 org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68)
 org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879)
 org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526)
 org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
 org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
 org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
 java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: IPC Client (1224493049) connection to localhost.localdomain/127.0.0.1:40701 from jenkins.hfs.1
 java.lang.Object.wait(Native Method)
 org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
 org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)

Potentially hanging thread: nioEventLoopGroup-17-3
 java.lang.Thread.sleep(Native Method)
 io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
 io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
 io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
 io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
 io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
 java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: nioEventLoopGroup-17-1
 java.lang.Thread.sleep(Native Method)
 io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
 io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
 io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
 io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
 io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
 java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: nioEventLoopGroup-13-3
 java.lang.Thread.sleep(Native Method)
 io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
 io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
 io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
 io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
 io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
 java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: ForkJoinPool-3-worker-3
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824)
 java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693)
 java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)

Potentially hanging thread: IPC Client (1224493049) connection to localhost.localdomain/127.0.0.1:40701 from jenkins
 java.lang.Object.wait(Native Method)
 org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
 org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)

 - Thread LEAK? -, OpenFileDescriptor=472 (was 442) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=142 (was 164), ProcessCount=168 (was 168), AvailableMemoryMB=8316 (was 8982)
2023-05-31 10:54:59,830 INFO [Listener at localhost.localdomain/37725] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=78, OpenFileDescriptor=472, MaxFileDescriptor=60000, SystemLoadAverage=142, ProcessCount=168, AvailableMemoryMB=8316
2023-05-31 10:54:59,830 INFO [Listener at localhost.localdomain/37725] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-05-31 10:54:59,830 INFO [Listener at localhost.localdomain/37725] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/hadoop.log.dir so I do NOT create it in target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc
2023-05-31 10:54:59,830 INFO [Listener at localhost.localdomain/37725] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/76149b96-4216-6c9f-e648-515856c74bd0/hadoop.tmp.dir so I do NOT create it in target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc
2023-05-31 10:54:59,830 INFO [Listener at localhost.localdomain/37725] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/cluster_51199b14-5bb5-ea9c-a50b-6e457aca193b, deleteOnExit=true
2023-05-31 10:54:59,830 INFO [Listener at localhost.localdomain/37725] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-05-31 10:54:59,831 INFO [Listener at localhost.localdomain/37725] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/test.cache.data in system properties and HBase conf
2023-05-31 10:54:59,831 INFO [Listener at localhost.localdomain/37725] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/hadoop.tmp.dir in system properties and HBase conf
2023-05-31 10:54:59,831 INFO [Listener at localhost.localdomain/37725] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/hadoop.log.dir in system properties and HBase conf
2023-05-31 10:54:59,831 INFO [Listener at localhost.localdomain/37725] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/mapreduce.cluster.local.dir in system properties and HBase conf
2023-05-31 10:54:59,831 INFO [Listener at localhost.localdomain/37725] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-05-31 10:54:59,831 INFO [Listener at localhost.localdomain/37725] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-05-31 10:54:59,831 DEBUG [Listener at localhost.localdomain/37725] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-05-31 10:54:59,832 INFO [Listener at localhost.localdomain/37725] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-05-31 10:54:59,832 INFO [Listener at localhost.localdomain/37725] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-05-31 10:54:59,832 INFO [Listener at localhost.localdomain/37725] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-05-31 10:54:59,832 INFO [Listener at localhost.localdomain/37725] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-05-31 10:54:59,832 INFO [Listener at localhost.localdomain/37725] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-05-31 10:54:59,832 INFO [Listener at localhost.localdomain/37725] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-05-31 10:54:59,832 INFO [Listener at localhost.localdomain/37725] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-05-31 10:54:59,832 INFO [Listener at localhost.localdomain/37725] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/dfs.journalnode.edits.dir in system properties and HBase conf
2023-05-31 10:54:59,832 INFO [Listener at localhost.localdomain/37725] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-05-31 10:54:59,832 INFO [Listener at localhost.localdomain/37725] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/nfs.dump.dir in system properties and HBase conf
2023-05-31 10:54:59,833 INFO [Listener at localhost.localdomain/37725] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/java.io.tmpdir in system properties and HBase conf
2023-05-31 10:54:59,833 INFO [Listener at localhost.localdomain/37725] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/dfs.journalnode.edits.dir in system properties and HBase conf
2023-05-31 10:54:59,833 INFO [Listener at localhost.localdomain/37725] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf
2023-05-31 10:54:59,833 INFO [Listener at localhost.localdomain/37725] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/fs.s3a.committer.staging.tmp.path in system properties and HBase conf
Formatting using clusterid: testClusterID
2023-05-31 10:54:59,834 WARN [Listener at localhost.localdomain/37725] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000.
2023-05-31 10:54:59,836 WARN [Listener at localhost.localdomain/37725] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-05-31 10:54:59,836 WARN [Listener at localhost.localdomain/37725] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-05-31 10:54:59,863 WARN [Listener at localhost.localdomain/37725] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-31 10:54:59,865 INFO [Listener at localhost.localdomain/37725] log.Slf4jLog(67): jetty-6.1.26
2023-05-31 10:54:59,869 INFO [Listener at localhost.localdomain/37725] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/java.io.tmpdir/Jetty_localhost_localdomain_43115_hdfs____c6zifk/webapp
2023-05-31 10:54:59,940 INFO [Listener at localhost.localdomain/37725] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:43115
2023-05-31 10:54:59,942 WARN [Listener at localhost.localdomain/37725] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000.
2023-05-31 10:54:59,943 WARN [Listener at localhost.localdomain/37725] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-05-31 10:54:59,943 WARN [Listener at localhost.localdomain/37725] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-05-31 10:54:59,972 WARN [Listener at localhost.localdomain/41421] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 10:54:59,987 WARN [Listener at localhost.localdomain/41421] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-05-31 10:54:59,989 WARN [Listener at localhost.localdomain/41421] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-31 10:54:59,990 INFO [Listener at localhost.localdomain/41421] log.Slf4jLog(67): jetty-6.1.26
2023-05-31 10:54:59,995 INFO [Listener at localhost.localdomain/41421] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/java.io.tmpdir/Jetty_localhost_35493_datanode____thtqqb/webapp
2023-05-31 10:55:00,070 INFO [Listener at localhost.localdomain/41421] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35493
2023-05-31 10:55:00,075 WARN [Listener at localhost.localdomain/35347] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 10:55:00,088 WARN [Listener at localhost.localdomain/35347] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-05-31 10:55:00,091 WARN [Listener at localhost.localdomain/35347] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-31 10:55:00,092 INFO [Listener at localhost.localdomain/35347] log.Slf4jLog(67): jetty-6.1.26
2023-05-31 10:55:00,098 INFO [Listener at localhost.localdomain/35347] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/java.io.tmpdir/Jetty_localhost_39351_datanode____dxse34/webapp
2023-05-31 10:55:00,146 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2a7fd30a2aebaa2: Processing first storage report for DS-cfaada67-9ff4-4681-b274-de87f0d6ea83 from datanode c1bec11b-5927-48ae-b61a-5a4b0da74803
2023-05-31 10:55:00,146 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2a7fd30a2aebaa2: from storage DS-cfaada67-9ff4-4681-b274-de87f0d6ea83 node DatanodeRegistration(127.0.0.1:44803, datanodeUuid=c1bec11b-5927-48ae-b61a-5a4b0da74803, infoPort=36123, infoSecurePort=0, ipcPort=35347, storageInfo=lv=-57;cid=testClusterID;nsid=902018535;c=1685530499838), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 10:55:00,146 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2a7fd30a2aebaa2: Processing first storage report for DS-d4dbadee-0360-4aa3-8fdb-ddbd2ad293da from datanode c1bec11b-5927-48ae-b61a-5a4b0da74803
2023-05-31 10:55:00,146 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2a7fd30a2aebaa2: from storage DS-d4dbadee-0360-4aa3-8fdb-ddbd2ad293da node DatanodeRegistration(127.0.0.1:44803, datanodeUuid=c1bec11b-5927-48ae-b61a-5a4b0da74803, infoPort=36123, infoSecurePort=0, ipcPort=35347, storageInfo=lv=-57;cid=testClusterID;nsid=902018535;c=1685530499838), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 10:55:00,195 INFO [Listener at localhost.localdomain/35347] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39351
2023-05-31 10:55:00,201 WARN [Listener at localhost.localdomain/42965] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 10:55:00,261 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7897456416fb8e2c: Processing first storage report for DS-28bbf8c2-3863-4f36-8c92-12b0e0b6b376 from datanode aeaeaae3-0177-4915-b49e-3a42a77f0c12
2023-05-31 10:55:00,261 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7897456416fb8e2c: from storage DS-28bbf8c2-3863-4f36-8c92-12b0e0b6b376 node DatanodeRegistration(127.0.0.1:40175, datanodeUuid=aeaeaae3-0177-4915-b49e-3a42a77f0c12, infoPort=33037, infoSecurePort=0, ipcPort=42965, storageInfo=lv=-57;cid=testClusterID;nsid=902018535;c=1685530499838), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 10:55:00,261 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7897456416fb8e2c: Processing first storage report for DS-c1b0d34a-36b5-4e12-af56-62e107b5bd71 from datanode aeaeaae3-0177-4915-b49e-3a42a77f0c12
2023-05-31 10:55:00,261 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7897456416fb8e2c: from storage DS-c1b0d34a-36b5-4e12-af56-62e107b5bd71 node DatanodeRegistration(127.0.0.1:40175, datanodeUuid=aeaeaae3-0177-4915-b49e-3a42a77f0c12, infoPort=33037, infoSecurePort=0, ipcPort=42965, storageInfo=lv=-57;cid=testClusterID;nsid=902018535;c=1685530499838), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 10:55:00,308 DEBUG [Listener at localhost.localdomain/42965] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc
2023-05-31 10:55:00,311 INFO [Listener at localhost.localdomain/42965] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/cluster_51199b14-5bb5-ea9c-a50b-6e457aca193b/zookeeper_0, clientPort=60515, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/cluster_51199b14-5bb5-ea9c-a50b-6e457aca193b/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/cluster_51199b14-5bb5-ea9c-a50b-6e457aca193b/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-05-31 10:55:00,314 INFO [Listener at localhost.localdomain/42965] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=60515
2023-05-31 10:55:00,314 INFO [Listener at localhost.localdomain/42965] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 10:55:00,316 INFO [Listener at localhost.localdomain/42965] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 10:55:00,333 INFO [Listener at localhost.localdomain/42965] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf with version=8
2023-05-31 10:55:00,333 INFO [Listener at localhost.localdomain/42965] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/hbase-staging
2023-05-31 10:55:00,335 INFO [Listener at localhost.localdomain/42965] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45
2023-05-31 10:55:00,335 INFO [Listener at localhost.localdomain/42965] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-05-31 10:55:00,335 INFO [Listener at localhost.localdomain/42965] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-05-31 10:55:00,335 INFO [Listener at localhost.localdomain/42965] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-05-31 10:55:00,336 INFO [Listener at localhost.localdomain/42965] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-05-31 10:55:00,336 INFO [Listener at localhost.localdomain/42965] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-05-31 10:55:00,336 INFO [Listener at localhost.localdomain/42965] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-05-31 10:55:00,337 INFO [Listener at localhost.localdomain/42965] ipc.NettyRpcServer(120): Bind to /148.251.75.209:45649
2023-05-31 10:55:00,337 INFO [Listener at localhost.localdomain/42965] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 10:55:00,338 INFO [Listener at localhost.localdomain/42965] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 10:55:00,339 INFO [Listener at localhost.localdomain/42965] zookeeper.RecoverableZooKeeper(93): Process identifier=master:45649 connecting to ZooKeeper ensemble=127.0.0.1:60515
2023-05-31 10:55:00,344 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): master:456490x0, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-05-31 10:55:00,345 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:45649-0x101a12875020000 connected
2023-05-31 10:55:00,358 DEBUG [Listener at localhost.localdomain/42965] zookeeper.ZKUtil(164): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-05-31 10:55:00,359 DEBUG [Listener at localhost.localdomain/42965] zookeeper.ZKUtil(164): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-05-31 10:55:00,360 DEBUG [Listener at localhost.localdomain/42965] zookeeper.ZKUtil(164): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-05-31 10:55:00,360 DEBUG [Listener at localhost.localdomain/42965] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45649
2023-05-31 10:55:00,360 DEBUG [Listener at localhost.localdomain/42965] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45649
2023-05-31 10:55:00,361 DEBUG [Listener at localhost.localdomain/42965] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45649
2023-05-31 10:55:00,361 DEBUG [Listener at localhost.localdomain/42965] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45649
2023-05-31 10:55:00,361 DEBUG [Listener at localhost.localdomain/42965] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45649
2023-05-31 10:55:00,362 INFO [Listener at localhost.localdomain/42965] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf, hbase.cluster.distributed=false
2023-05-31 10:55:00,375 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-05-31 10:55:00,377 INFO [Listener at localhost.localdomain/42965] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45
2023-05-31 10:55:00,377 INFO [Listener at localhost.localdomain/42965] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-05-31 10:55:00,378 INFO [Listener at localhost.localdomain/42965] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-05-31 10:55:00,378 INFO [Listener at localhost.localdomain/42965] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-05-31 10:55:00,378 INFO [Listener at localhost.localdomain/42965] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-05-31 10:55:00,378 INFO [Listener at localhost.localdomain/42965] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-05-31 10:55:00,378 INFO [Listener at localhost.localdomain/42965] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-05-31 10:55:00,379 INFO [Listener at localhost.localdomain/42965] ipc.NettyRpcServer(120): Bind to /148.251.75.209:44663
2023-05-31 10:55:00,380 INFO [Listener at localhost.localdomain/42965] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-05-31 10:55:00,380 DEBUG [Listener at localhost.localdomain/42965] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-05-31 10:55:00,381 INFO [Listener at localhost.localdomain/42965] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 10:55:00,382 INFO [Listener at localhost.localdomain/42965] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 10:55:00,383 INFO [Listener at localhost.localdomain/42965] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44663 connecting to ZooKeeper ensemble=127.0.0.1:60515
2023-05-31 10:55:00,385 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): regionserver:446630x0, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-05-31 10:55:00,386 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44663-0x101a12875020001 connected
2023-05-31 10:55:00,386 DEBUG [Listener at localhost.localdomain/42965] zookeeper.ZKUtil(164): regionserver:44663-0x101a12875020001, quorum=127.0.0.1:60515, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-05-31 10:55:00,387 DEBUG [Listener at localhost.localdomain/42965] zookeeper.ZKUtil(164): regionserver:44663-0x101a12875020001, quorum=127.0.0.1:60515, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-05-31 10:55:00,387 DEBUG [Listener at localhost.localdomain/42965] zookeeper.ZKUtil(164): regionserver:44663-0x101a12875020001, quorum=127.0.0.1:60515, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-05-31 10:55:00,388 DEBUG [Listener at localhost.localdomain/42965] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44663
2023-05-31 10:55:00,388 DEBUG [Listener at localhost.localdomain/42965] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44663
2023-05-31 10:55:00,388 DEBUG [Listener at localhost.localdomain/42965] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44663
2023-05-31 10:55:00,389 DEBUG [Listener at localhost.localdomain/42965] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44663
2023-05-31 10:55:00,389 DEBUG [Listener at localhost.localdomain/42965] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44663
2023-05-31 10:55:00,390 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,45649,1685530500335
2023-05-31 10:55:00,391 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-05-31 10:55:00,391 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,45649,1685530500335
2023-05-31 10:55:00,392 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-05-31 10:55:00,392 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): regionserver:44663-0x101a12875020001, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-05-31 10:55:00,392 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 10:55:00,393 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-05-31 10:55:00,394 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-05-31 10:55:00,394 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,45649,1685530500335 from backup master directory
2023-05-31 10:55:00,395 DEBUG [Listener at
localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,45649,1685530500335 2023-05-31 10:55:00,395 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-31 10:55:00,395 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-31 10:55:00,395 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,45649,1685530500335 2023-05-31 10:55:00,409 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/hbase.id with ID: b85d95e5-bef8-41a4-83e4-3cffa9cf5594 2023-05-31 10:55:00,420 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:55:00,423 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:55:00,432 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x350a50e7 to 127.0.0.1:60515 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 10:55:00,441 
DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2d1ad79, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 10:55:00,441 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-31 10:55:00,441 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-31 10:55:00,442 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 10:55:00,443 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/MasterData/data/master/store-tmp 2023-05-31 10:55:00,452 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 10:55:00,452 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-31 10:55:00,452 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 10:55:00,452 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 10:55:00,452 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-31 10:55:00,452 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 10:55:00,452 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-31 10:55:00,452 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 10:55:00,453 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/MasterData/WALs/jenkins-hbase20.apache.org,45649,1685530500335 2023-05-31 10:55:00,455 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C45649%2C1685530500335, suffix=, logDir=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/MasterData/WALs/jenkins-hbase20.apache.org,45649,1685530500335, archiveDir=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/MasterData/oldWALs, maxLogs=10 2023-05-31 10:55:00,465 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/MasterData/WALs/jenkins-hbase20.apache.org,45649,1685530500335/jenkins-hbase20.apache.org%2C45649%2C1685530500335.1685530500456 2023-05-31 10:55:00,465 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44803,DS-cfaada67-9ff4-4681-b274-de87f0d6ea83,DISK], DatanodeInfoWithStorage[127.0.0.1:40175,DS-28bbf8c2-3863-4f36-8c92-12b0e0b6b376,DISK]] 2023-05-31 10:55:00,466 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-31 10:55:00,466 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 10:55:00,466 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:55:00,466 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:55:00,471 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:55:00,473 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-31 10:55:00,473 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-31 10:55:00,474 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] 
regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 10:55:00,475 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:55:00,475 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:55:00,478 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:55:00,480 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 10:55:00,480 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=865637, jitterRate=0.10071517527103424}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 10:55:00,480 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 10:55:00,481 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, 
flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-31 10:55:00,482 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-31 10:55:00,482 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-31 10:55:00,482 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-31 10:55:00,483 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-31 10:55:00,483 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-31 10:55:00,483 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-31 10:55:00,485 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-31 10:55:00,486 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-05-31 10:55:00,495 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-31 10:55:00,496 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-05-31 10:55:00,497 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-31 10:55:00,497 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-31 10:55:00,497 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-31 10:55:00,499 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:55:00,499 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-31 10:55:00,500 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): 
master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-31 10:55:00,500 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-31 10:55:00,501 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-31 10:55:00,501 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): regionserver:44663-0x101a12875020001, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-31 10:55:00,501 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:55:00,502 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,45649,1685530500335, sessionid=0x101a12875020000, setting cluster-up flag (Was=false) 2023-05-31 10:55:00,505 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:55:00,508 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-31 10:55:00,510 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] 
procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,45649,1685530500335 2023-05-31 10:55:00,513 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:55:00,516 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-31 10:55:00,516 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,45649,1685530500335 2023-05-31 10:55:00,517 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/.hbase-snapshot/.tmp 2023-05-31 10:55:00,521 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-31 10:55:00,521 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-31 10:55:00,521 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-31 10:55:00,521 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-31 10:55:00,521 DEBUG 
[master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-31 10:55:00,521 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-05-31 10:55:00,522 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:55:00,522 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-31 10:55:00,522 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:55:00,531 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685530530531 2023-05-31 10:55:00,531 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-31 10:55:00,531 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-31 10:55:00,532 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-31 10:55:00,532 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-31 10:55:00,532 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-31 10:55:00,532 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-31 10:55:00,535 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-31 10:55:00,536 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-31 10:55:00,539 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-31 10:55:00,539 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-31 10:55:00,539 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-31 10:55:00,539 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-31 10:55:00,540 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-31 10:55:00,540 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-31 10:55:00,540 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large 
file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685530500540,5,FailOnTimeoutGroup] 2023-05-31 10:55:00,540 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685530500540,5,FailOnTimeoutGroup] 2023-05-31 10:55:00,540 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-31 10:55:00,540 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-31 10:55:00,540 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-31 10:55:00,540 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-05-31 10:55:00,541 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-31 10:55:00,560 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-31 10:55:00,560 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-31 10:55:00,561 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', 
KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf 2023-05-31 10:55:00,581 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 10:55:00,583 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-31 10:55:00,585 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/data/hbase/meta/1588230740/info 2023-05-31 10:55:00,585 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 
0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-31 10:55:00,586 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 10:55:00,586 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-31 10:55:00,587 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/data/hbase/meta/1588230740/rep_barrier 2023-05-31 10:55:00,588 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-31 
10:55:00,588 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 10:55:00,588 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-31 10:55:00,590 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/data/hbase/meta/1588230740/table 2023-05-31 10:55:00,591 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-31 10:55:00,591 INFO [RS:0;jenkins-hbase20:44663] regionserver.HRegionServer(951): ClusterId : b85d95e5-bef8-41a4-83e4-3cffa9cf5594 2023-05-31 10:55:00,592 DEBUG [RS:0;jenkins-hbase20:44663] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-31 10:55:00,592 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, 
verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 10:55:00,593 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/data/hbase/meta/1588230740 2023-05-31 10:55:00,594 DEBUG [RS:0;jenkins-hbase20:44663] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-31 10:55:00,594 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/data/hbase/meta/1588230740 2023-05-31 10:55:00,594 DEBUG [RS:0;jenkins-hbase20:44663] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-31 10:55:00,596 DEBUG [RS:0;jenkins-hbase20:44663] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-31 10:55:00,597 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-05-31 10:55:00,597 DEBUG [RS:0;jenkins-hbase20:44663] zookeeper.ReadOnlyZKClient(139): Connect 0x6c859253 to 127.0.0.1:60515 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 10:55:00,599 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-31 10:55:00,604 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 10:55:00,605 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=737277, jitterRate=-0.06250426173210144}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-31 10:55:00,605 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-31 10:55:00,605 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-31 10:55:00,605 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-31 10:55:00,605 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-31 10:55:00,605 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-31 10:55:00,605 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-31 10:55:00,606 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-31 10:55:00,606 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-31 10:55:00,607 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; 
InitMetaProcedure table=hbase:meta 2023-05-31 10:55:00,607 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-31 10:55:00,607 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-31 10:55:00,607 DEBUG [RS:0;jenkins-hbase20:44663] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@68ba1b1b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 10:55:00,608 DEBUG [RS:0;jenkins-hbase20:44663] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2db89868, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-31 10:55:00,609 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-31 10:55:00,610 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-31 10:55:00,616 DEBUG [RS:0;jenkins-hbase20:44663] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:44663 2023-05-31 10:55:00,616 INFO [RS:0;jenkins-hbase20:44663] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-31 10:55:00,616 INFO 
[RS:0;jenkins-hbase20:44663] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-31 10:55:00,616 DEBUG [RS:0;jenkins-hbase20:44663] regionserver.HRegionServer(1022): About to register with Master. 2023-05-31 10:55:00,617 INFO [RS:0;jenkins-hbase20:44663] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,45649,1685530500335 with isa=jenkins-hbase20.apache.org/148.251.75.209:44663, startcode=1685530500377 2023-05-31 10:55:00,617 DEBUG [RS:0;jenkins-hbase20:44663] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-31 10:55:00,620 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:43125, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-05-31 10:55:00,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45649] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,44663,1685530500377 2023-05-31 10:55:00,622 DEBUG [RS:0;jenkins-hbase20:44663] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf 2023-05-31 10:55:00,622 DEBUG [RS:0;jenkins-hbase20:44663] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:41421 2023-05-31 10:55:00,622 DEBUG [RS:0;jenkins-hbase20:44663] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-31 10:55:00,623 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 10:55:00,624 DEBUG [RS:0;jenkins-hbase20:44663] zookeeper.ZKUtil(162): regionserver:44663-0x101a12875020001, quorum=127.0.0.1:60515, 
baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44663,1685530500377 2023-05-31 10:55:00,624 WARN [RS:0;jenkins-hbase20:44663] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-31 10:55:00,624 INFO [RS:0;jenkins-hbase20:44663] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 10:55:00,624 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,44663,1685530500377] 2023-05-31 10:55:00,624 DEBUG [RS:0;jenkins-hbase20:44663] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377 2023-05-31 10:55:00,631 DEBUG [RS:0;jenkins-hbase20:44663] zookeeper.ZKUtil(162): regionserver:44663-0x101a12875020001, quorum=127.0.0.1:60515, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44663,1685530500377 2023-05-31 10:55:00,632 DEBUG [RS:0;jenkins-hbase20:44663] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-31 10:55:00,632 INFO [RS:0;jenkins-hbase20:44663] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-31 10:55:00,634 INFO [RS:0;jenkins-hbase20:44663] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-31 10:55:00,634 INFO [RS:0;jenkins-hbase20:44663] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-31 10:55:00,634 INFO [RS:0;jenkins-hbase20:44663] hbase.ChoreService(166): Chore ScheduledChore 
name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-31 10:55:00,635 INFO [RS:0;jenkins-hbase20:44663] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-31 10:55:00,637 INFO [RS:0;jenkins-hbase20:44663] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-05-31 10:55:00,637 DEBUG [RS:0;jenkins-hbase20:44663] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:55:00,637 DEBUG [RS:0;jenkins-hbase20:44663] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:55:00,637 DEBUG [RS:0;jenkins-hbase20:44663] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:55:00,637 DEBUG [RS:0;jenkins-hbase20:44663] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:55:00,637 DEBUG [RS:0;jenkins-hbase20:44663] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:55:00,637 DEBUG [RS:0;jenkins-hbase20:44663] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-31 10:55:00,637 DEBUG [RS:0;jenkins-hbase20:44663] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:55:00,637 DEBUG [RS:0;jenkins-hbase20:44663] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, 
maxPoolSize=1 2023-05-31 10:55:00,637 DEBUG [RS:0;jenkins-hbase20:44663] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:55:00,637 DEBUG [RS:0;jenkins-hbase20:44663] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:55:00,638 INFO [RS:0;jenkins-hbase20:44663] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-31 10:55:00,638 INFO [RS:0;jenkins-hbase20:44663] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-31 10:55:00,638 INFO [RS:0;jenkins-hbase20:44663] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-31 10:55:00,648 INFO [RS:0;jenkins-hbase20:44663] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-31 10:55:00,648 INFO [RS:0;jenkins-hbase20:44663] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44663,1685530500377-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-31 10:55:00,657 INFO [RS:0;jenkins-hbase20:44663] regionserver.Replication(203): jenkins-hbase20.apache.org,44663,1685530500377 started 2023-05-31 10:55:00,657 INFO [RS:0;jenkins-hbase20:44663] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,44663,1685530500377, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:44663, sessionid=0x101a12875020001 2023-05-31 10:55:00,657 DEBUG [RS:0;jenkins-hbase20:44663] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-31 10:55:00,658 DEBUG [RS:0;jenkins-hbase20:44663] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,44663,1685530500377 2023-05-31 10:55:00,658 DEBUG [RS:0;jenkins-hbase20:44663] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,44663,1685530500377' 2023-05-31 10:55:00,658 DEBUG [RS:0;jenkins-hbase20:44663] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 10:55:00,658 DEBUG [RS:0;jenkins-hbase20:44663] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 10:55:00,659 DEBUG [RS:0;jenkins-hbase20:44663] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-31 10:55:00,659 DEBUG [RS:0;jenkins-hbase20:44663] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-31 10:55:00,659 DEBUG [RS:0;jenkins-hbase20:44663] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,44663,1685530500377 2023-05-31 10:55:00,659 DEBUG [RS:0;jenkins-hbase20:44663] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,44663,1685530500377' 2023-05-31 10:55:00,659 DEBUG [RS:0;jenkins-hbase20:44663] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures 
on node: '/hbase/online-snapshot/abort' 2023-05-31 10:55:00,659 DEBUG [RS:0;jenkins-hbase20:44663] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-31 10:55:00,659 DEBUG [RS:0;jenkins-hbase20:44663] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-31 10:55:00,659 INFO [RS:0;jenkins-hbase20:44663] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-31 10:55:00,660 INFO [RS:0;jenkins-hbase20:44663] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-05-31 10:55:00,760 DEBUG [jenkins-hbase20:45649] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-31 10:55:00,761 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,44663,1685530500377, state=OPENING 2023-05-31 10:55:00,762 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-31 10:55:00,762 INFO [RS:0;jenkins-hbase20:44663] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C44663%2C1685530500377, suffix=, logDir=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377, archiveDir=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/oldWALs, maxLogs=32 2023-05-31 10:55:00,763 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:55:00,763 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; 
OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,44663,1685530500377}] 2023-05-31 10:55:00,763 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-31 10:55:00,776 INFO [RS:0;jenkins-hbase20:44663] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530500763 2023-05-31 10:55:00,776 DEBUG [RS:0;jenkins-hbase20:44663] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40175,DS-28bbf8c2-3863-4f36-8c92-12b0e0b6b376,DISK], DatanodeInfoWithStorage[127.0.0.1:44803,DS-cfaada67-9ff4-4681-b274-de87f0d6ea83,DISK]] 2023-05-31 10:55:00,919 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,44663,1685530500377 2023-05-31 10:55:00,919 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-31 10:55:00,922 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:52640, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-31 10:55:00,930 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-31 10:55:00,930 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 10:55:00,933 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C44663%2C1685530500377.meta, suffix=.meta, 
logDir=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377, archiveDir=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/oldWALs, maxLogs=32 2023-05-31 10:55:00,953 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.meta.1685530500935.meta 2023-05-31 10:55:00,953 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44803,DS-cfaada67-9ff4-4681-b274-de87f0d6ea83,DISK], DatanodeInfoWithStorage[127.0.0.1:40175,DS-28bbf8c2-3863-4f36-8c92-12b0e0b6b376,DISK]] 2023-05-31 10:55:00,954 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-31 10:55:00,954 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-31 10:55:00,954 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-31 10:55:00,954 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-05-31 10:55:00,955 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-31 10:55:00,955 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 10:55:00,955 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-31 10:55:00,955 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-31 10:55:00,958 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-31 10:55:00,959 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/data/hbase/meta/1588230740/info 2023-05-31 10:55:00,959 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/data/hbase/meta/1588230740/info 2023-05-31 10:55:00,960 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-31 10:55:00,961 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 10:55:00,962 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-31 10:55:00,963 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/data/hbase/meta/1588230740/rep_barrier 2023-05-31 10:55:00,963 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/data/hbase/meta/1588230740/rep_barrier 2023-05-31 10:55:00,964 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-31 10:55:00,964 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 10:55:00,964 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-31 10:55:00,966 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/data/hbase/meta/1588230740/table 2023-05-31 10:55:00,966 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/data/hbase/meta/1588230740/table 2023-05-31 10:55:00,966 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-31 10:55:00,967 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 10:55:00,968 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/data/hbase/meta/1588230740 2023-05-31 10:55:00,969 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/data/hbase/meta/1588230740 2023-05-31 10:55:00,972 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-05-31 10:55:00,973 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-31 10:55:00,974 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=817344, jitterRate=0.03930731117725372}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-31 10:55:00,975 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-31 10:55:00,976 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685530500919 2023-05-31 10:55:00,980 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-31 10:55:00,981 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-31 10:55:00,981 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,44663,1685530500377, state=OPEN 2023-05-31 10:55:00,983 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-31 10:55:00,983 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-31 10:55:00,985 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-31 10:55:00,985 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,44663,1685530500377 in 220 msec 2023-05-31 10:55:00,987 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-31 10:55:00,988 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 378 msec 2023-05-31 10:55:00,990 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 469 msec 2023-05-31 10:55:00,990 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685530500990, completionTime=-1 2023-05-31 10:55:00,990 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-31 10:55:00,990 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-05-31 10:55:00,992 DEBUG [hconnection-0x407b748b-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-05-31 10:55:00,994 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:52650, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-05-31 10:55:00,996 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1
2023-05-31 10:55:00,996 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685530560996
2023-05-31 10:55:00,996 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685530620996
2023-05-31 10:55:00,996 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec
2023-05-31 10:55:01,003 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,45649,1685530500335-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled.
2023-05-31 10:55:01,003 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,45649,1685530500335-BalancerChore, period=300000, unit=MILLISECONDS is enabled.
2023-05-31 10:55:01,003 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,45649,1685530500335-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled.
2023-05-31 10:55:01,004 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:45649, period=300000, unit=MILLISECONDS is enabled.
2023-05-31 10:55:01,004 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled.
2023-05-31 10:55:01,004 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating...
2023-05-31 10:55:01,004 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}
2023-05-31 10:55:01,006 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace
2023-05-31 10:55:01,006 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175):
2023-05-31 10:55:01,008 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION
2023-05-31 10:55:01,009 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT
2023-05-31 10:55:01,011 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/.tmp/data/hbase/namespace/52dd25eac04998f3629384e93420de10
2023-05-31 10:55:01,012 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/.tmp/data/hbase/namespace/52dd25eac04998f3629384e93420de10 empty.
2023-05-31 10:55:01,012 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/.tmp/data/hbase/namespace/52dd25eac04998f3629384e93420de10
2023-05-31 10:55:01,012 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions
2023-05-31 10:55:01,024 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001
2023-05-31 10:55:01,026 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 52dd25eac04998f3629384e93420de10, NAME => 'hbase:namespace,,1685530501004.52dd25eac04998f3629384e93420de10.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/.tmp
2023-05-31 10:55:01,037 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685530501004.52dd25eac04998f3629384e93420de10.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 10:55:01,038 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 52dd25eac04998f3629384e93420de10, disabling compactions & flushes
2023-05-31 10:55:01,038 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685530501004.52dd25eac04998f3629384e93420de10.
2023-05-31 10:55:01,038 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685530501004.52dd25eac04998f3629384e93420de10.
2023-05-31 10:55:01,038 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685530501004.52dd25eac04998f3629384e93420de10. after waiting 0 ms
2023-05-31 10:55:01,038 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685530501004.52dd25eac04998f3629384e93420de10.
2023-05-31 10:55:01,038 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685530501004.52dd25eac04998f3629384e93420de10.
2023-05-31 10:55:01,038 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 52dd25eac04998f3629384e93420de10:
2023-05-31 10:55:01,041 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META
2023-05-31 10:55:01,042 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685530501004.52dd25eac04998f3629384e93420de10.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685530501042"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685530501042"}]},"ts":"1685530501042"}
2023-05-31 10:55:01,044 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta.
2023-05-31 10:55:01,046 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS
2023-05-31 10:55:01,046 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685530501046"}]},"ts":"1685530501046"}
2023-05-31 10:55:01,048 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta
2023-05-31 10:55:01,052 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=52dd25eac04998f3629384e93420de10, ASSIGN}]
2023-05-31 10:55:01,054 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=52dd25eac04998f3629384e93420de10, ASSIGN
2023-05-31 10:55:01,055 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=52dd25eac04998f3629384e93420de10, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44663,1685530500377; forceNewPlan=false, retain=false
2023-05-31 10:55:01,206 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=52dd25eac04998f3629384e93420de10, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44663,1685530500377
2023-05-31 10:55:01,206 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1685530501004.52dd25eac04998f3629384e93420de10.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685530501206"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685530501206"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685530501206"}]},"ts":"1685530501206"}
2023-05-31 10:55:01,209 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 52dd25eac04998f3629384e93420de10, server=jenkins-hbase20.apache.org,44663,1685530500377}]
2023-05-31 10:55:01,366 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685530501004.52dd25eac04998f3629384e93420de10.
2023-05-31 10:55:01,367 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 52dd25eac04998f3629384e93420de10, NAME => 'hbase:namespace,,1685530501004.52dd25eac04998f3629384e93420de10.', STARTKEY => '', ENDKEY => ''}
2023-05-31 10:55:01,367 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 52dd25eac04998f3629384e93420de10
2023-05-31 10:55:01,367 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685530501004.52dd25eac04998f3629384e93420de10.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 10:55:01,367 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 52dd25eac04998f3629384e93420de10
2023-05-31 10:55:01,367 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 52dd25eac04998f3629384e93420de10
2023-05-31 10:55:01,372 INFO [StoreOpener-52dd25eac04998f3629384e93420de10-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 52dd25eac04998f3629384e93420de10
2023-05-31 10:55:01,374 DEBUG [StoreOpener-52dd25eac04998f3629384e93420de10-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/data/hbase/namespace/52dd25eac04998f3629384e93420de10/info
2023-05-31 10:55:01,374 DEBUG [StoreOpener-52dd25eac04998f3629384e93420de10-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/data/hbase/namespace/52dd25eac04998f3629384e93420de10/info
2023-05-31 10:55:01,375 INFO [StoreOpener-52dd25eac04998f3629384e93420de10-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 52dd25eac04998f3629384e93420de10 columnFamilyName info
2023-05-31 10:55:01,375 INFO [StoreOpener-52dd25eac04998f3629384e93420de10-1] regionserver.HStore(310): Store=52dd25eac04998f3629384e93420de10/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 10:55:01,376 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/data/hbase/namespace/52dd25eac04998f3629384e93420de10
2023-05-31 10:55:01,377 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/data/hbase/namespace/52dd25eac04998f3629384e93420de10
2023-05-31 10:55:01,379 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 52dd25eac04998f3629384e93420de10
2023-05-31 10:55:01,382 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/data/hbase/namespace/52dd25eac04998f3629384e93420de10/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-05-31 10:55:01,382 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 52dd25eac04998f3629384e93420de10; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=797318, jitterRate=0.013842806220054626}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-05-31 10:55:01,383 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 52dd25eac04998f3629384e93420de10:
2023-05-31 10:55:01,385 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685530501004.52dd25eac04998f3629384e93420de10., pid=6, masterSystemTime=1685530501361
2023-05-31 10:55:01,388 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685530501004.52dd25eac04998f3629384e93420de10.
2023-05-31 10:55:01,388 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685530501004.52dd25eac04998f3629384e93420de10.
2023-05-31 10:55:01,389 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=52dd25eac04998f3629384e93420de10, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44663,1685530500377
2023-05-31 10:55:01,389 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685530501004.52dd25eac04998f3629384e93420de10.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685530501389"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685530501389"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685530501389"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685530501389"}]},"ts":"1685530501389"}
2023-05-31 10:55:01,394 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5
2023-05-31 10:55:01,394 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 52dd25eac04998f3629384e93420de10, server=jenkins-hbase20.apache.org,44663,1685530500377 in 183 msec
2023-05-31 10:55:01,396 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4
2023-05-31 10:55:01,397 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=52dd25eac04998f3629384e93420de10, ASSIGN in 342 msec
2023-05-31 10:55:01,397 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE
2023-05-31 10:55:01,398 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685530501398"}]},"ts":"1685530501398"}
2023-05-31 10:55:01,400 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta
2023-05-31 10:55:01,402 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION
2023-05-31 10:55:01,404 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 398 msec
2023-05-31 10:55:01,407 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace
2023-05-31 10:55:01,422 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace
2023-05-31 10:55:01,422 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 10:55:01,430 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default
2023-05-31 10:55:01,441 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace
2023-05-31 10:55:01,445 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 15 msec
2023-05-31 10:55:01,453 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase
2023-05-31 10:55:01,462 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace
2023-05-31 10:55:01,466 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 12 msec
2023-05-31 10:55:01,478 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default
2023-05-31 10:55:01,479 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase
2023-05-31 10:55:01,479 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.084sec
2023-05-31 10:55:01,479 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled
2023-05-31 10:55:01,480 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting.
2023-05-31 10:55:01,481 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding
2023-05-31 10:55:01,481 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,45649,1685530500335-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled.
2023-05-31 10:55:01,481 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,45649,1685530500335-MobCompactionChore, period=604800, unit=SECONDS is enabled.
2023-05-31 10:55:01,483 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds
2023-05-31 10:55:01,491 DEBUG [Listener at localhost.localdomain/42965] zookeeper.ReadOnlyZKClient(139): Connect 0x02099140 to 127.0.0.1:60515 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-05-31 10:55:01,496 DEBUG [Listener at localhost.localdomain/42965] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@da1bce2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-05-31 10:55:01,497 DEBUG [hconnection-0x33f62b03-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-05-31 10:55:01,500 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:52652, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-05-31 10:55:01,501 INFO [Listener at localhost.localdomain/42965] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,45649,1685530500335
2023-05-31 10:55:01,501 INFO [Listener at localhost.localdomain/42965] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 10:55:01,507 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer
2023-05-31 10:55:01,508 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 10:55:01,508 INFO [Listener at localhost.localdomain/42965] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false
2023-05-31 10:55:01,509 INFO [Listener at localhost.localdomain/42965] wal.TestLogRolling(429): Starting testLogRollOnPipelineRestart
2023-05-31 10:55:01,509 INFO [Listener at localhost.localdomain/42965] wal.TestLogRolling(432): Replication=2
2023-05-31 10:55:01,511 DEBUG [Listener at localhost.localdomain/42965] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false
2023-05-31 10:55:01,519 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:59106, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService
2023-05-31 10:55:01,521 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45649] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions.
2023-05-31 10:55:01,522 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45649] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing.
2023-05-31 10:55:01,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45649] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-05-31 10:55:01,529 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45649] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart
2023-05-31 10:55:01,531 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_PRE_OPERATION
2023-05-31 10:55:01,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45649] master.MasterRpcServices(697): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnPipelineRestart" procId is: 9
2023-05-31 10:55:01,532 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_WRITE_FS_LAYOUT
2023-05-31 10:55:01,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45649] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9
2023-05-31 10:55:01,538 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/033f5aefd7b68aaa0d16cfdbc1d8c2ee
2023-05-31 10:55:01,538 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/033f5aefd7b68aaa0d16cfdbc1d8c2ee empty.
2023-05-31 10:55:01,539 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/033f5aefd7b68aaa0d16cfdbc1d8c2ee
2023-05-31 10:55:01,539 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnPipelineRestart regions
2023-05-31 10:55:01,552 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/.tabledesc/.tableinfo.0000000001
2023-05-31 10:55:01,553 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(7675): creating {ENCODED => 033f5aefd7b68aaa0d16cfdbc1d8c2ee, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1685530501521.033f5aefd7b68aaa0d16cfdbc1d8c2ee.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/.tmp
2023-05-31 10:55:01,561 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1685530501521.033f5aefd7b68aaa0d16cfdbc1d8c2ee.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 10:55:01,561 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1604): Closing 033f5aefd7b68aaa0d16cfdbc1d8c2ee, disabling compactions & flushes
2023-05-31 10:55:01,562 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1685530501521.033f5aefd7b68aaa0d16cfdbc1d8c2ee.
2023-05-31 10:55:01,562 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685530501521.033f5aefd7b68aaa0d16cfdbc1d8c2ee.
2023-05-31 10:55:01,562 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685530501521.033f5aefd7b68aaa0d16cfdbc1d8c2ee. after waiting 0 ms
2023-05-31 10:55:01,562 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1685530501521.033f5aefd7b68aaa0d16cfdbc1d8c2ee.
2023-05-31 10:55:01,562 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnPipelineRestart,,1685530501521.033f5aefd7b68aaa0d16cfdbc1d8c2ee.
2023-05-31 10:55:01,562 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1558): Region close journal for 033f5aefd7b68aaa0d16cfdbc1d8c2ee:
2023-05-31 10:55:01,564 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ADD_TO_META
2023-05-31 10:55:01,565 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1685530501521.033f5aefd7b68aaa0d16cfdbc1d8c2ee.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1685530501565"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685530501565"}]},"ts":"1685530501565"}
2023-05-31 10:55:01,567 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta.
2023-05-31 10:55:01,568 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ASSIGN_REGIONS
2023-05-31 10:55:01,568 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685530501568"}]},"ts":"1685530501568"}
2023-05-31 10:55:01,570 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLING in hbase:meta
2023-05-31 10:55:01,573 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=033f5aefd7b68aaa0d16cfdbc1d8c2ee, ASSIGN}]
2023-05-31 10:55:01,574 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=033f5aefd7b68aaa0d16cfdbc1d8c2ee, ASSIGN
2023-05-31 10:55:01,575 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=033f5aefd7b68aaa0d16cfdbc1d8c2ee, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44663,1685530500377; forceNewPlan=false, retain=false
2023-05-31 10:55:01,727 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=033f5aefd7b68aaa0d16cfdbc1d8c2ee, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44663,1685530500377
2023-05-31 10:55:01,727 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1685530501521.033f5aefd7b68aaa0d16cfdbc1d8c2ee.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1685530501726"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685530501726"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685530501726"}]},"ts":"1685530501726"}
2023-05-31 10:55:01,730 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 033f5aefd7b68aaa0d16cfdbc1d8c2ee, server=jenkins-hbase20.apache.org,44663,1685530500377}]
2023-05-31 10:55:01,887 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnPipelineRestart,,1685530501521.033f5aefd7b68aaa0d16cfdbc1d8c2ee.
2023-05-31 10:55:01,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 033f5aefd7b68aaa0d16cfdbc1d8c2ee, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1685530501521.033f5aefd7b68aaa0d16cfdbc1d8c2ee.', STARTKEY => '', ENDKEY => ''} 2023-05-31 10:55:01,888 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnPipelineRestart 033f5aefd7b68aaa0d16cfdbc1d8c2ee 2023-05-31 10:55:01,888 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1685530501521.033f5aefd7b68aaa0d16cfdbc1d8c2ee.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 10:55:01,888 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 033f5aefd7b68aaa0d16cfdbc1d8c2ee 2023-05-31 10:55:01,888 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 033f5aefd7b68aaa0d16cfdbc1d8c2ee 2023-05-31 10:55:01,890 INFO [StoreOpener-033f5aefd7b68aaa0d16cfdbc1d8c2ee-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 033f5aefd7b68aaa0d16cfdbc1d8c2ee 2023-05-31 10:55:01,891 DEBUG [StoreOpener-033f5aefd7b68aaa0d16cfdbc1d8c2ee-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/data/default/TestLogRolling-testLogRollOnPipelineRestart/033f5aefd7b68aaa0d16cfdbc1d8c2ee/info 2023-05-31 10:55:01,891 DEBUG [StoreOpener-033f5aefd7b68aaa0d16cfdbc1d8c2ee-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/data/default/TestLogRolling-testLogRollOnPipelineRestart/033f5aefd7b68aaa0d16cfdbc1d8c2ee/info 2023-05-31 10:55:01,892 INFO [StoreOpener-033f5aefd7b68aaa0d16cfdbc1d8c2ee-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 033f5aefd7b68aaa0d16cfdbc1d8c2ee columnFamilyName info 2023-05-31 10:55:01,893 INFO [StoreOpener-033f5aefd7b68aaa0d16cfdbc1d8c2ee-1] regionserver.HStore(310): Store=033f5aefd7b68aaa0d16cfdbc1d8c2ee/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 10:55:01,893 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/data/default/TestLogRolling-testLogRollOnPipelineRestart/033f5aefd7b68aaa0d16cfdbc1d8c2ee 2023-05-31 10:55:01,894 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/data/default/TestLogRolling-testLogRollOnPipelineRestart/033f5aefd7b68aaa0d16cfdbc1d8c2ee 2023-05-31 
10:55:01,897 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 033f5aefd7b68aaa0d16cfdbc1d8c2ee 2023-05-31 10:55:01,899 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/data/default/TestLogRolling-testLogRollOnPipelineRestart/033f5aefd7b68aaa0d16cfdbc1d8c2ee/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 10:55:01,900 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 033f5aefd7b68aaa0d16cfdbc1d8c2ee; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=719186, jitterRate=-0.08550843596458435}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 10:55:01,900 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 033f5aefd7b68aaa0d16cfdbc1d8c2ee: 2023-05-31 10:55:01,901 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnPipelineRestart,,1685530501521.033f5aefd7b68aaa0d16cfdbc1d8c2ee., pid=11, masterSystemTime=1685530501883 2023-05-31 10:55:01,903 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnPipelineRestart,,1685530501521.033f5aefd7b68aaa0d16cfdbc1d8c2ee. 2023-05-31 10:55:01,903 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnPipelineRestart,,1685530501521.033f5aefd7b68aaa0d16cfdbc1d8c2ee. 
2023-05-31 10:55:01,904 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=033f5aefd7b68aaa0d16cfdbc1d8c2ee, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44663,1685530500377 2023-05-31 10:55:01,904 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1685530501521.033f5aefd7b68aaa0d16cfdbc1d8c2ee.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1685530501904"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685530501904"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685530501904"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685530501904"}]},"ts":"1685530501904"} 2023-05-31 10:55:01,908 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-31 10:55:01,909 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 033f5aefd7b68aaa0d16cfdbc1d8c2ee, server=jenkins-hbase20.apache.org,44663,1685530500377 in 176 msec 2023-05-31 10:55:01,911 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-31 10:55:01,911 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=033f5aefd7b68aaa0d16cfdbc1d8c2ee, ASSIGN in 336 msec 2023-05-31 10:55:01,912 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-31 10:55:01,912 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685530501912"}]},"ts":"1685530501912"} 2023-05-31 10:55:01,914 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLED in hbase:meta 2023-05-31 10:55:01,916 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_POST_OPERATION 2023-05-31 10:55:01,918 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart in 394 msec 2023-05-31 10:55:04,412 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-31 10:55:06,632 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnPipelineRestart' 2023-05-31 10:55:11,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45649] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-31 10:55:11,535 INFO [Listener at localhost.localdomain/42965] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnPipelineRestart, procId: 9 completed 2023-05-31 10:55:11,537 DEBUG [Listener at localhost.localdomain/42965] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRollOnPipelineRestart 2023-05-31 10:55:11,537 DEBUG [Listener at localhost.localdomain/42965] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnPipelineRestart,,1685530501521.033f5aefd7b68aaa0d16cfdbc1d8c2ee. 
2023-05-31 10:55:13,544 INFO [Listener at localhost.localdomain/42965] wal.TestLogRolling(469): log.getCurrentFileName()): hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530500763
2023-05-31 10:55:13,544 WARN [Listener at localhost.localdomain/42965] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-05-31 10:55:13,546 WARN [ResponseProcessor for block BP-205262694-148.251.75.209-1685530499838:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-205262694-148.251.75.209-1685530499838:blk_1073741832_1008
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-05-31 10:55:13,547 WARN [ResponseProcessor for block BP-205262694-148.251.75.209-1685530499838:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-205262694-148.251.75.209-1685530499838:blk_1073741829_1005
java.io.IOException: Bad response ERROR for BP-205262694-148.251.75.209-1685530499838:blk_1073741829_1005 from datanode DatanodeInfoWithStorage[127.0.0.1:40175,DS-28bbf8c2-3863-4f36-8c92-12b0e0b6b376,DISK]
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120)
2023-05-31 10:55:13,548 WARN [DataStreamer for file /user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530500763 block BP-205262694-148.251.75.209-1685530499838:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-205262694-148.251.75.209-1685530499838:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:40175,DS-28bbf8c2-3863-4f36-8c92-12b0e0b6b376,DISK], DatanodeInfoWithStorage[127.0.0.1:44803,DS-cfaada67-9ff4-4681-b274-de87f0d6ea83,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:40175,DS-28bbf8c2-3863-4f36-8c92-12b0e0b6b376,DISK]) is bad.
2023-05-31 10:55:13,548 WARN [DataStreamer for file /user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/MasterData/WALs/jenkins-hbase20.apache.org,45649,1685530500335/jenkins-hbase20.apache.org%2C45649%2C1685530500335.1685530500456 block BP-205262694-148.251.75.209-1685530499838:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-205262694-148.251.75.209-1685530499838:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:44803,DS-cfaada67-9ff4-4681-b274-de87f0d6ea83,DISK], DatanodeInfoWithStorage[127.0.0.1:40175,DS-28bbf8c2-3863-4f36-8c92-12b0e0b6b376,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:40175,DS-28bbf8c2-3863-4f36-8c92-12b0e0b6b376,DISK]) is bad.
2023-05-31 10:55:13,548 WARN [PacketResponder: BP-205262694-148.251.75.209-1685530499838:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:40175]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run():
java.io.IOException: Connection reset by peer
	at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
	at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
	at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
	at sun.nio.ch.IOUtil.write(IOUtil.java:65)
	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470)
	at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 10:55:13,554 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1831776409_17 at /127.0.0.1:42864 [Receiving block BP-205262694-148.251.75.209-1685530499838:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:44803:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42864 dst: /127.0.0.1:44803
java.io.IOException: Premature EOF from inputStream
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 10:55:13,571 WARN [ResponseProcessor for block BP-205262694-148.251.75.209-1685530499838:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-205262694-148.251.75.209-1685530499838:blk_1073741833_1009
java.io.IOException: Bad response ERROR for BP-205262694-148.251.75.209-1685530499838:blk_1073741833_1009 from datanode DatanodeInfoWithStorage[127.0.0.1:40175,DS-28bbf8c2-3863-4f36-8c92-12b0e0b6b376,DISK]
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120)
2023-05-31 10:55:13,571 WARN [DataStreamer for file /user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.meta.1685530500935.meta block BP-205262694-148.251.75.209-1685530499838:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-205262694-148.251.75.209-1685530499838:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:44803,DS-cfaada67-9ff4-4681-b274-de87f0d6ea83,DISK], DatanodeInfoWithStorage[127.0.0.1:40175,DS-28bbf8c2-3863-4f36-8c92-12b0e0b6b376,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:40175,DS-28bbf8c2-3863-4f36-8c92-12b0e0b6b376,DISK]) is bad.
2023-05-31 10:55:13,571 WARN [PacketResponder: BP-205262694-148.251.75.209-1685530499838:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:40175]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run():
java.io.IOException: Connection reset by peer
	at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
	at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
	at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
	at sun.nio.ch.IOUtil.write(IOUtil.java:65)
	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470)
	at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 10:55:13,574 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-460343173_17 at /127.0.0.1:35358 [Receiving block BP-205262694-148.251.75.209-1685530499838:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:44803:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:35358 dst: /127.0.0.1:44803
java.io.IOException: Premature EOF from inputStream
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 10:55:13,575 INFO [Listener at localhost.localdomain/42965] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-31 10:55:13,577 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-460343173_17 at /127.0.0.1:42888 [Receiving block BP-205262694-148.251.75.209-1685530499838:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:44803:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42888 dst: /127.0.0.1:44803
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:44803 remote=/127.0.0.1:42888]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 10:55:13,584 WARN [PacketResponder: BP-205262694-148.251.75.209-1685530499838:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:44803]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run():
java.io.IOException: The stream is closed
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 10:55:13,586 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-460343173_17 at /127.0.0.1:37698 [Receiving block BP-205262694-148.251.75.209-1685530499838:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:40175:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:37698 dst: /127.0.0.1:40175
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 10:55:13,679 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1831776409_17 at /127.0.0.1:37678 [Receiving block BP-205262694-148.251.75.209-1685530499838:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:40175:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:37678 dst: /127.0.0.1:40175
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 10:55:13,679 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-460343173_17 at /127.0.0.1:59954 [Receiving block BP-205262694-148.251.75.209-1685530499838:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:40175:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:59954 dst: /127.0.0.1:40175
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 10:55:13,681 WARN [BP-205262694-148.251.75.209-1685530499838 heartbeating to localhost.localdomain/127.0.0.1:41421] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-05-31 10:55:13,681 WARN [BP-205262694-148.251.75.209-1685530499838 heartbeating to localhost.localdomain/127.0.0.1:41421] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-205262694-148.251.75.209-1685530499838 (Datanode Uuid aeaeaae3-0177-4915-b49e-3a42a77f0c12) service to localhost.localdomain/127.0.0.1:41421
2023-05-31 10:55:13,682 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/cluster_51199b14-5bb5-ea9c-a50b-6e457aca193b/dfs/data/data3/current/BP-205262694-148.251.75.209-1685530499838] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 10:55:13,682 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/cluster_51199b14-5bb5-ea9c-a50b-6e457aca193b/dfs/data/data4/current/BP-205262694-148.251.75.209-1685530499838] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 10:55:13,691 WARN [Listener at localhost.localdomain/42965] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-05-31 10:55:13,695 WARN [Listener at localhost.localdomain/42965] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-31 10:55:13,697 INFO [Listener at localhost.localdomain/42965] log.Slf4jLog(67): jetty-6.1.26
2023-05-31 10:55:13,705 INFO [Listener at localhost.localdomain/42965] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/java.io.tmpdir/Jetty_localhost_34353_datanode____.okwcuh/webapp
2023-05-31 10:55:13,790 INFO [Listener at localhost.localdomain/42965] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34353
2023-05-31 10:55:13,801 WARN [Listener at localhost.localdomain/34905] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 10:55:13,806 WARN [Listener at localhost.localdomain/34905] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-05-31 10:55:13,806 WARN [ResponseProcessor for block BP-205262694-148.251.75.209-1685530499838:blk_1073741833_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-205262694-148.251.75.209-1685530499838:blk_1073741833_1016
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-05-31 10:55:13,806 WARN [ResponseProcessor for block BP-205262694-148.251.75.209-1685530499838:blk_1073741829_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-205262694-148.251.75.209-1685530499838:blk_1073741829_1014
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-05-31 10:55:13,806 WARN [ResponseProcessor for block BP-205262694-148.251.75.209-1685530499838:blk_1073741832_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-205262694-148.251.75.209-1685530499838:blk_1073741832_1015
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-05-31 10:55:13,816 INFO [Listener at localhost.localdomain/34905] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-31 10:55:13,819 WARN [BP-205262694-148.251.75.209-1685530499838 heartbeating to localhost.localdomain/127.0.0.1:41421] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-05-31 10:55:13,819 WARN [BP-205262694-148.251.75.209-1685530499838 heartbeating to localhost.localdomain/127.0.0.1:41421] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-205262694-148.251.75.209-1685530499838 (Datanode Uuid c1bec11b-5927-48ae-b61a-5a4b0da74803) service to localhost.localdomain/127.0.0.1:41421
2023-05-31 10:55:13,819 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/cluster_51199b14-5bb5-ea9c-a50b-6e457aca193b/dfs/data/data1/current/BP-205262694-148.251.75.209-1685530499838] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 10:55:13,820 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/cluster_51199b14-5bb5-ea9c-a50b-6e457aca193b/dfs/data/data2/current/BP-205262694-148.251.75.209-1685530499838] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 10:55:13,820 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1831776409_17 at /127.0.0.1:36334 [Receiving block BP-205262694-148.251.75.209-1685530499838:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:44803:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:36334 dst: /127.0.0.1:44803
java.io.InterruptedIOException: Interrupted while waiting for IO
on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:55:13,821 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-460343173_17 at /127.0.0.1:36342 [Receiving block BP-205262694-148.251.75.209-1685530499838:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:44803:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:36342 dst: /127.0.0.1:44803 java.io.InterruptedIOException: Interrupted while waiting for IO on channel 
java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:55:13,821 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-460343173_17 at /127.0.0.1:36330 [Receiving block BP-205262694-148.251.75.209-1685530499838:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:44803:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:36330 dst: /127.0.0.1:44803 java.io.InterruptedIOException: Interrupted while waiting for IO on channel 
java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:55:13,830 WARN [Listener at localhost.localdomain/34905] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-31 10:55:13,832 WARN [Listener at localhost.localdomain/34905] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 10:55:13,834 INFO [Listener at localhost.localdomain/34905] log.Slf4jLog(67): 
jetty-6.1.26 2023-05-31 10:55:13,841 INFO [Listener at localhost.localdomain/34905] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/java.io.tmpdir/Jetty_localhost_38501_datanode____.s0doo8/webapp 2023-05-31 10:55:13,869 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd64ae81bd179d40a: Processing first storage report for DS-28bbf8c2-3863-4f36-8c92-12b0e0b6b376 from datanode aeaeaae3-0177-4915-b49e-3a42a77f0c12 2023-05-31 10:55:13,870 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd64ae81bd179d40a: from storage DS-28bbf8c2-3863-4f36-8c92-12b0e0b6b376 node DatanodeRegistration(127.0.0.1:44001, datanodeUuid=aeaeaae3-0177-4915-b49e-3a42a77f0c12, infoPort=44989, infoSecurePort=0, ipcPort=34905, storageInfo=lv=-57;cid=testClusterID;nsid=902018535;c=1685530499838), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 10:55:13,870 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd64ae81bd179d40a: Processing first storage report for DS-c1b0d34a-36b5-4e12-af56-62e107b5bd71 from datanode aeaeaae3-0177-4915-b49e-3a42a77f0c12 2023-05-31 10:55:13,870 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd64ae81bd179d40a: from storage DS-c1b0d34a-36b5-4e12-af56-62e107b5bd71 node DatanodeRegistration(127.0.0.1:44001, datanodeUuid=aeaeaae3-0177-4915-b49e-3a42a77f0c12, infoPort=44989, infoSecurePort=0, ipcPort=34905, storageInfo=lv=-57;cid=testClusterID;nsid=902018535;c=1685530499838), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 10:55:13,937 INFO [Listener 
at localhost.localdomain/34905] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38501 2023-05-31 10:55:13,945 WARN [Listener at localhost.localdomain/34801] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 10:55:14,048 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfe9ec5ae2e807983: Processing first storage report for DS-cfaada67-9ff4-4681-b274-de87f0d6ea83 from datanode c1bec11b-5927-48ae-b61a-5a4b0da74803 2023-05-31 10:55:14,048 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfe9ec5ae2e807983: from storage DS-cfaada67-9ff4-4681-b274-de87f0d6ea83 node DatanodeRegistration(127.0.0.1:40411, datanodeUuid=c1bec11b-5927-48ae-b61a-5a4b0da74803, infoPort=34403, infoSecurePort=0, ipcPort=34801, storageInfo=lv=-57;cid=testClusterID;nsid=902018535;c=1685530499838), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 10:55:14,048 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfe9ec5ae2e807983: Processing first storage report for DS-d4dbadee-0360-4aa3-8fdb-ddbd2ad293da from datanode c1bec11b-5927-48ae-b61a-5a4b0da74803 2023-05-31 10:55:14,048 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfe9ec5ae2e807983: from storage DS-d4dbadee-0360-4aa3-8fdb-ddbd2ad293da node DatanodeRegistration(127.0.0.1:40411, datanodeUuid=c1bec11b-5927-48ae-b61a-5a4b0da74803, infoPort=34403, infoSecurePort=0, ipcPort=34801, storageInfo=lv=-57;cid=testClusterID;nsid=902018535;c=1685530499838), blocks: 6, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-31 10:55:14,950 INFO [Listener at localhost.localdomain/34801] wal.TestLogRolling(481): Data Nodes restarted 2023-05-31 10:55:14,951 INFO [Listener at localhost.localdomain/34801] wal.AbstractTestLogRolling(233): Validated row 
row1002 2023-05-31 10:55:14,953 WARN [RS:0;jenkins-hbase20:44663.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=5, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44803,DS-cfaada67-9ff4-4681-b274-de87f0d6ea83,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 10:55:14,954 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C44663%2C1685530500377:(num 1685530500763) roll requested 2023-05-31 10:55:14,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44663] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44803,DS-cfaada67-9ff4-4681-b274-de87f0d6ea83,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 10:55:14,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44663] ipc.CallRunner(144): callId: 11 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:52652 deadline: 1685530524952, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL 2023-05-31 10:55:14,986 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530500763 newFile=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530514955 2023-05-31 10:55:14,987 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL 2023-05-31 10:55:14,987 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530500763 with entries=5, filesize=2.11 KB; new WAL /user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530514955 
2023-05-31 10:55:14,987 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40411,DS-cfaada67-9ff4-4681-b274-de87f0d6ea83,DISK], DatanodeInfoWithStorage[127.0.0.1:44001,DS-28bbf8c2-3863-4f36-8c92-12b0e0b6b376,DISK]] 2023-05-31 10:55:14,987 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530500763 is not closed yet, will try archiving it next time 2023-05-31 10:55:14,987 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44803,DS-cfaada67-9ff4-4681-b274-de87f0d6ea83,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 10:55:14,987 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530500763; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44803,DS-cfaada67-9ff4-4681-b274-de87f0d6ea83,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 10:55:27,051 INFO [Listener at localhost.localdomain/34801] wal.AbstractTestLogRolling(233): Validated row row1003 2023-05-31 10:55:29,054 WARN [Listener at localhost.localdomain/34801] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 10:55:29,056 WARN [ResponseProcessor for block BP-205262694-148.251.75.209-1685530499838:blk_1073741838_1017] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-205262694-148.251.75.209-1685530499838:blk_1073741838_1017 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-31 10:55:29,056 WARN [DataStreamer for file /user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530514955 block BP-205262694-148.251.75.209-1685530499838:blk_1073741838_1017] hdfs.DataStreamer(1548): Error Recovery for BP-205262694-148.251.75.209-1685530499838:blk_1073741838_1017 in pipeline [DatanodeInfoWithStorage[127.0.0.1:40411,DS-cfaada67-9ff4-4681-b274-de87f0d6ea83,DISK], DatanodeInfoWithStorage[127.0.0.1:44001,DS-28bbf8c2-3863-4f36-8c92-12b0e0b6b376,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:40411,DS-cfaada67-9ff4-4681-b274-de87f0d6ea83,DISK]) is bad. 
2023-05-31 10:55:29,061 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-460343173_17 at /127.0.0.1:47220 [Receiving block BP-205262694-148.251.75.209-1685530499838:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:44001:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:47220 dst: /127.0.0.1:44001 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:44001 remote=/127.0.0.1:47220]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:55:29,062 INFO [Listener at localhost.localdomain/34801] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 10:55:29,061 WARN [PacketResponder: BP-205262694-148.251.75.209-1685530499838:blk_1073741838_1017, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:44001]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:55:29,063 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-460343173_17 at /127.0.0.1:37340 [Receiving block BP-205262694-148.251.75.209-1685530499838:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:40411:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:37340 dst: /127.0.0.1:40411 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:55:29,166 WARN [BP-205262694-148.251.75.209-1685530499838 heartbeating to localhost.localdomain/127.0.0.1:41421] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-31 10:55:29,166 WARN [BP-205262694-148.251.75.209-1685530499838 heartbeating to localhost.localdomain/127.0.0.1:41421] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-205262694-148.251.75.209-1685530499838 (Datanode Uuid 
c1bec11b-5927-48ae-b61a-5a4b0da74803) service to localhost.localdomain/127.0.0.1:41421 2023-05-31 10:55:29,167 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/cluster_51199b14-5bb5-ea9c-a50b-6e457aca193b/dfs/data/data1/current/BP-205262694-148.251.75.209-1685530499838] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 10:55:29,167 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/cluster_51199b14-5bb5-ea9c-a50b-6e457aca193b/dfs/data/data2/current/BP-205262694-148.251.75.209-1685530499838] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 10:55:29,174 WARN [Listener at localhost.localdomain/34801] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-31 10:55:29,176 WARN [Listener at localhost.localdomain/34801] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 10:55:29,178 INFO [Listener at localhost.localdomain/34801] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 10:55:29,186 INFO [Listener at localhost.localdomain/34801] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/java.io.tmpdir/Jetty_localhost_44795_datanode____.gnet8m/webapp 2023-05-31 10:55:29,262 INFO [Listener at localhost.localdomain/34801] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44795 2023-05-31 10:55:29,270 WARN [Listener at 
localhost.localdomain/42917] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 10:55:29,274 WARN  [Listener at localhost.localdomain/42917] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-05-31 10:55:29,274 WARN  [ResponseProcessor for block BP-205262694-148.251.75.209-1685530499838:blk_1073741838_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-205262694-148.251.75.209-1685530499838:blk_1073741838_1018
java.io.EOFException: Unexpected EOF while trying to read response from server
    at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
    at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-05-31 10:55:29,279 INFO  [Listener at localhost.localdomain/42917] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-31 10:55:29,337 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6290909c8e96986: Processing first storage report for DS-cfaada67-9ff4-4681-b274-de87f0d6ea83 from datanode c1bec11b-5927-48ae-b61a-5a4b0da74803
2023-05-31 10:55:29,337 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6290909c8e96986: from storage DS-cfaada67-9ff4-4681-b274-de87f0d6ea83 node DatanodeRegistration(127.0.0.1:39681, datanodeUuid=c1bec11b-5927-48ae-b61a-5a4b0da74803, infoPort=41117, infoSecurePort=0, ipcPort=42917, storageInfo=lv=-57;cid=testClusterID;nsid=902018535;c=1685530499838), blocks: 8, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0
2023-05-31 10:55:29,337 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6290909c8e96986: Processing first storage report for DS-d4dbadee-0360-4aa3-8fdb-ddbd2ad293da from datanode c1bec11b-5927-48ae-b61a-5a4b0da74803
2023-05-31 10:55:29,337 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6290909c8e96986: from storage DS-d4dbadee-0360-4aa3-8fdb-ddbd2ad293da node DatanodeRegistration(127.0.0.1:39681, datanodeUuid=c1bec11b-5927-48ae-b61a-5a4b0da74803, infoPort=41117, infoSecurePort=0, ipcPort=42917, storageInfo=lv=-57;cid=testClusterID;nsid=902018535;c=1685530499838), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 10:55:29,384 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-460343173_17 at /127.0.0.1:39856 [Receiving block BP-205262694-148.251.75.209-1685530499838:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:44001:DataXceiver error processing WRITE_BLOCK operation  src: /127.0.0.1:39856 dst: /127.0.0.1:44001
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    at java.io.DataInputStream.read(DataInputStream.java:149)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
    at java.lang.Thread.run(Thread.java:750)
2023-05-31 10:55:29,386 WARN  [BP-205262694-148.251.75.209-1685530499838 heartbeating to localhost.localdomain/127.0.0.1:41421] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-05-31 10:55:29,386 WARN  [BP-205262694-148.251.75.209-1685530499838 heartbeating to localhost.localdomain/127.0.0.1:41421] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-205262694-148.251.75.209-1685530499838 (Datanode Uuid aeaeaae3-0177-4915-b49e-3a42a77f0c12) service to localhost.localdomain/127.0.0.1:41421
2023-05-31 10:55:29,387 WARN  [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/cluster_51199b14-5bb5-ea9c-a50b-6e457aca193b/dfs/data/data3/current/BP-205262694-148.251.75.209-1685530499838] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 10:55:29,388 WARN  [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/cluster_51199b14-5bb5-ea9c-a50b-6e457aca193b/dfs/data/data4/current/BP-205262694-148.251.75.209-1685530499838] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 10:55:29,395 WARN  [Listener at localhost.localdomain/42917] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-05-31 10:55:29,398 WARN  [Listener at localhost.localdomain/42917] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-31 10:55:29,399 INFO  [Listener at localhost.localdomain/42917] log.Slf4jLog(67): jetty-6.1.26
2023-05-31 10:55:29,405 INFO  [Listener at localhost.localdomain/42917] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/java.io.tmpdir/Jetty_localhost_38705_datanode____2fzvby/webapp
2023-05-31 10:55:29,477 INFO  [Listener at localhost.localdomain/42917] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38705
2023-05-31 10:55:29,483 WARN  [Listener at localhost.localdomain/42301] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 10:55:29,539 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x513f54d24e578f08: Processing first storage report for DS-28bbf8c2-3863-4f36-8c92-12b0e0b6b376 from datanode aeaeaae3-0177-4915-b49e-3a42a77f0c12
2023-05-31 10:55:29,539 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x513f54d24e578f08: from storage DS-28bbf8c2-3863-4f36-8c92-12b0e0b6b376 node DatanodeRegistration(127.0.0.1:33953, datanodeUuid=aeaeaae3-0177-4915-b49e-3a42a77f0c12, infoPort=42139, infoSecurePort=0, ipcPort=42301, storageInfo=lv=-57;cid=testClusterID;nsid=902018535;c=1685530499838), blocks: 8, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0
2023-05-31 10:55:29,539 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x513f54d24e578f08: Processing first storage report for DS-c1b0d34a-36b5-4e12-af56-62e107b5bd71 from datanode aeaeaae3-0177-4915-b49e-3a42a77f0c12
2023-05-31 10:55:29,539 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x513f54d24e578f08: from storage DS-c1b0d34a-36b5-4e12-af56-62e107b5bd71 node DatanodeRegistration(127.0.0.1:33953, datanodeUuid=aeaeaae3-0177-4915-b49e-3a42a77f0c12, infoPort=42139, infoSecurePort=0, ipcPort=42301, storageInfo=lv=-57;cid=testClusterID;nsid=902018535;c=1685530499838), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 10:55:30,489 INFO  [Listener at localhost.localdomain/42301] wal.TestLogRolling(498): Data Nodes restarted
2023-05-31 10:55:30,493 INFO  [Listener at localhost.localdomain/42301] wal.AbstractTestLogRolling(233): Validated row row1004
2023-05-31 10:55:30,494 WARN  [RS:0;jenkins-hbase20:44663.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=8, requesting roll of WAL
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44001,DS-28bbf8c2-3863-4f36-8c92-12b0e0b6b376,DISK]] are bad. Aborting...
    at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-31 10:55:30,496 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C44663%2C1685530500377:(num 1685530514955) roll requested
2023-05-31 10:55:30,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44663] ipc.MetricsHBaseServer(134): Unknown exception type
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979)
    at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168)
    at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125)
    at java.lang.Thread.run(Thread.java:750)
Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44001,DS-28bbf8c2-3863-4f36-8c92-12b0e0b6b376,DISK]] are bad. Aborting...
    at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-31 10:55:30,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44663] ipc.CallRunner(144): callId: 18 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:52652 deadline: 1685530540494, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL
2023-05-31 10:55:30,511 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530514955 newFile=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530530496
2023-05-31 10:55:30,511 WARN  [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL
2023-05-31 10:55:30,511 INFO  [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530514955 with entries=2, filesize=2.37 KB; new WAL /user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530530496
2023-05-31 10:55:30,511 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33953,DS-28bbf8c2-3863-4f36-8c92-12b0e0b6b376,DISK], DatanodeInfoWithStorage[127.0.0.1:39681,DS-cfaada67-9ff4-4681-b274-de87f0d6ea83,DISK]]
2023-05-31 10:55:30,511 WARN  [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing...
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44001,DS-28bbf8c2-3863-4f36-8c92-12b0e0b6b376,DISK]] are bad. Aborting...
    at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-31 10:55:30,511 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530514955 is not closed yet, will try archiving it next time
2023-05-31 10:55:30,511 WARN  [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530514955; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44001,DS-28bbf8c2-3863-4f36-8c92-12b0e0b6b376,DISK]] are bad. Aborting...
    at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-31 10:55:30,532 WARN  [master/jenkins-hbase20:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44803,DS-cfaada67-9ff4-4681-b274-de87f0d6ea83,DISK]] are bad. Aborting...
    at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-31 10:55:30,533 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C45649%2C1685530500335:(num 1685530500456) roll requested
2023-05-31 10:55:30,533 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979)
    at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168)
    at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125)
    at java.lang.Thread.run(Thread.java:750)
Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44803,DS-cfaada67-9ff4-4681-b274-de87f0d6ea83,DISK]] are bad. Aborting...
    at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-31 10:55:30,533 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL
java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL
    at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423)
    at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135)
    at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122)
    at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101)
    at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68)
Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979)
    at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168)
    at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125)
    at java.lang.Thread.run(Thread.java:750)
Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44803,DS-cfaada67-9ff4-4681-b274-de87f0d6ea83,DISK]] are bad. Aborting...
    at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-31 10:55:30,543 WARN  [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL
2023-05-31 10:55:30,543 INFO  [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/MasterData/WALs/jenkins-hbase20.apache.org,45649,1685530500335/jenkins-hbase20.apache.org%2C45649%2C1685530500335.1685530500456 with entries=88, filesize=43.81 KB; new WAL /user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/MasterData/WALs/jenkins-hbase20.apache.org,45649,1685530500335/jenkins-hbase20.apache.org%2C45649%2C1685530500335.1685530530533
2023-05-31 10:55:30,543 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33953,DS-28bbf8c2-3863-4f36-8c92-12b0e0b6b376,DISK], DatanodeInfoWithStorage[127.0.0.1:39681,DS-cfaada67-9ff4-4681-b274-de87f0d6ea83,DISK]]
2023-05-31 10:55:30,543 WARN  [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing...
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44803,DS-cfaada67-9ff4-4681-b274-de87f0d6ea83,DISK]] are bad. Aborting...
    at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-31 10:55:30,543 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/MasterData/WALs/jenkins-hbase20.apache.org,45649,1685530500335/jenkins-hbase20.apache.org%2C45649%2C1685530500335.1685530500456 is not closed yet, will try archiving it next time
2023-05-31 10:55:30,543 WARN  [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/MasterData/WALs/jenkins-hbase20.apache.org,45649,1685530500335/jenkins-hbase20.apache.org%2C45649%2C1685530500335.1685530500456; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44803,DS-cfaada67-9ff4-4681-b274-de87f0d6ea83,DISK]] are bad. Aborting...
    at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-31 10:55:42,584 DEBUG [Listener at localhost.localdomain/42301] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530530496 newFile=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530542571
2023-05-31 10:55:42,585 INFO  [Listener at localhost.localdomain/42301] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530530496 with entries=1, filesize=1.22 KB; new WAL /user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530542571
2023-05-31 10:55:42,596 DEBUG [Listener at localhost.localdomain/42301] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33953,DS-28bbf8c2-3863-4f36-8c92-12b0e0b6b376,DISK], DatanodeInfoWithStorage[127.0.0.1:39681,DS-cfaada67-9ff4-4681-b274-de87f0d6ea83,DISK]]
2023-05-31 10:55:42,596 DEBUG [Listener at localhost.localdomain/42301] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530530496 is not closed yet, will try archiving it next time
2023-05-31 10:55:42,596 DEBUG [Listener at localhost.localdomain/42301] wal.TestLogRolling(512): recovering lease for hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530500763
2023-05-31 10:55:42,597 INFO  [Listener at localhost.localdomain/42301] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530500763
2023-05-31 10:55:42,599 WARN  [IPC Server handler 3 on default port 41421] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530500763 has not been closed. Lease recovery is in progress. RecoveryId = 1022 for block blk_1073741832_1015
2023-05-31 10:55:42,601 INFO  [Listener at localhost.localdomain/42301] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530500763 after 4ms
2023-05-31 10:55:43,573 WARN  [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@a53b6cd] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-205262694-148.251.75.209-1685530499838:blk_1073741832_1015, datanode=DatanodeInfoWithStorage[127.0.0.1:33953,null,null])
java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741832_1015, replica=ReplicaWaitingToBeRecovered, blk_1073741832_1008, RWR
  getNumBytes()     = 2162
  getBytesOnDisk()  = 2162
  getVisibleLength()= -1
  getVolume()       = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/cluster_51199b14-5bb5-ea9c-a50b-6e457aca193b/dfs/data/data4/current
  getBlockFile()    = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/cluster_51199b14-5bb5-ea9c-a50b-6e457aca193b/dfs/data/data4/current/BP-205262694-148.251.75.209-1685530499838/current/rbw/blk_1073741832
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835)
    at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346)
    at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46)
    at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120)
    at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383)
    at java.lang.Thread.run(Thread.java:750)
2023-05-31 10:55:46,602 INFO  [Listener at localhost.localdomain/42301] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530500763 after 4005ms
2023-05-31 10:55:46,602 DEBUG [Listener at localhost.localdomain/42301] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530500763
2023-05-31 10:55:46,616 DEBUG [Listener at localhost.localdomain/42301] wal.TestLogRolling(522): #3: [\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1685530501383/Put/vlen=176/seqid=0]
2023-05-31 10:55:46,617 DEBUG [Listener at localhost.localdomain/42301] wal.TestLogRolling(522): #4: [default/info:d/1685530501437/Put/vlen=9/seqid=0]
2023-05-31 10:55:46,617 DEBUG [Listener at localhost.localdomain/42301] wal.TestLogRolling(522): #5: [hbase/info:d/1685530501459/Put/vlen=7/seqid=0]
2023-05-31 10:55:46,617 DEBUG [Listener at localhost.localdomain/42301] wal.TestLogRolling(522): #3: [\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1685530501900/Put/vlen=232/seqid=0]
2023-05-31 10:55:46,617 DEBUG [Listener at localhost.localdomain/42301] wal.TestLogRolling(522): #4: [row1002/info:/1685530511542/Put/vlen=1045/seqid=0]
2023-05-31 10:55:46,617 DEBUG [Listener at localhost.localdomain/42301] wal.ProtobufLogReader(420): EOF at position 2162
2023-05-31 10:55:46,617 DEBUG [Listener at localhost.localdomain/42301] wal.TestLogRolling(512): recovering lease for hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530514955
2023-05-31 10:55:46,617 INFO  [Listener at localhost.localdomain/42301] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530514955
2023-05-31 10:55:46,618 WARN  [IPC Server handler 4 on default port 41421] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530514955 has not been closed. Lease recovery is in progress. RecoveryId = 1023 for block blk_1073741838_1018
2023-05-31 10:55:46,618 INFO  [Listener at localhost.localdomain/42301] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530514955 after 1ms
2023-05-31 10:55:47,551 WARN  [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@76487ad0] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-205262694-148.251.75.209-1685530499838:blk_1073741838_1018, datanode=DatanodeInfoWithStorage[127.0.0.1:39681,null,null])
java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR
  getNumBytes()     = 2425
  getBytesOnDisk()  = 2425
  getVisibleLength()= -1
  getVolume()       = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/cluster_51199b14-5bb5-ea9c-a50b-6e457aca193b/dfs/data/data1/current
  getBlockFile()    = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/cluster_51199b14-5bb5-ea9c-a50b-6e457aca193b/dfs/data/data1/current/BP-205262694-148.251.75.209-1685530499838/current/rbw/blk_1073741838
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835)
    at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55)
    at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110)
    at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:348)
    at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46)
    at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120)
    at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR
  getNumBytes()     = 2425
  getBytesOnDisk()  = 2425
  getVisibleLength()= -1
  getVolume()       = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/cluster_51199b14-5bb5-ea9c-a50b-6e457aca193b/dfs/data/data1/current
  getBlockFile()    = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/cluster_51199b14-5bb5-ea9c-a50b-6e457aca193b/dfs/data/data1/current/BP-205262694-148.251.75.209-1685530499838/current/rbw/blk_1073741838
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835)
    at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55)
    at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540)
    at org.apache.hadoop.ipc.Client.call(Client.java:1486)
    at org.apache.hadoop.ipc.Client.call(Client.java:1385)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
    at com.sun.proxy.$Proxy43.initReplicaRecovery(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolTranslatorPB.initReplicaRecovery(InterDatanodeProtocolTranslatorPB.java:83)
    at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346)
    ... 4 more
2023-05-31 10:55:50,620 INFO  [Listener at localhost.localdomain/42301] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530514955 after 4003ms
2023-05-31 10:55:50,620 DEBUG [Listener at localhost.localdomain/42301] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530514955
2023-05-31 10:55:50,629 DEBUG [Listener at localhost.localdomain/42301] wal.TestLogRolling(522): #6: [row1003/info:/1685530525045/Put/vlen=1045/seqid=0]
2023-05-31 10:55:50,629 DEBUG [Listener at localhost.localdomain/42301] wal.TestLogRolling(522): #7: [row1004/info:/1685530527052/Put/vlen=1045/seqid=0]
2023-05-31 10:55:50,629 DEBUG [Listener at localhost.localdomain/42301] wal.ProtobufLogReader(420): EOF at position 2425
2023-05-31 10:55:50,629 DEBUG [Listener at localhost.localdomain/42301] wal.TestLogRolling(512): recovering lease for hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530530496
2023-05-31 10:55:50,629 INFO  [Listener at localhost.localdomain/42301] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530530496
2023-05-31 10:55:50,630 INFO  [Listener at localhost.localdomain/42301] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=0 on file=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530530496 after 1ms
2023-05-31 10:55:50,630 DEBUG [Listener at localhost.localdomain/42301] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530530496
2023-05-31 10:55:50,634 DEBUG [Listener at localhost.localdomain/42301] wal.TestLogRolling(522): #9: [row1005/info:/1685530540567/Put/vlen=1045/seqid=0]
2023-05-31 10:55:50,635 DEBUG [Listener at localhost.localdomain/42301] wal.TestLogRolling(512): recovering lease for hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530542571
2023-05-31 10:55:50,635 INFO  [Listener at localhost.localdomain/42301] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530542571
2023-05-31 10:55:50,635 WARN  [IPC Server handler 0 on default port 41421] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530542571 has not been closed. Lease recovery is in progress. RecoveryId = 1024 for block blk_1073741841_1021
2023-05-31 10:55:50,635 INFO  [Listener at localhost.localdomain/42301] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530542571 after 0ms
2023-05-31 10:55:51,546 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1831776409_17 at /127.0.0.1:43632 [Receiving block BP-205262694-148.251.75.209-1685530499838:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:33953:DataXceiver error processing WRITE_BLOCK operation  src: /127.0.0.1:43632 dst: /127.0.0.1:33953
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:33953 remote=/127.0.0.1:43632]. 60000 millis timeout left.
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:55:51,547 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1831776409_17 at /127.0.0.1:47426 [Receiving block BP-205262694-148.251.75.209-1685530499838:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:39681:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:47426 dst: /127.0.0.1:39681 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:55:51,546 WARN [ResponseProcessor for block BP-205262694-148.251.75.209-1685530499838:blk_1073741841_1021] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-205262694-148.251.75.209-1685530499838:blk_1073741841_1021 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-31 10:55:51,548 WARN [DataStreamer for file /user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530542571 block BP-205262694-148.251.75.209-1685530499838:blk_1073741841_1021] hdfs.DataStreamer(1548): Error Recovery for BP-205262694-148.251.75.209-1685530499838:blk_1073741841_1021 in pipeline [DatanodeInfoWithStorage[127.0.0.1:33953,DS-28bbf8c2-3863-4f36-8c92-12b0e0b6b376,DISK], 
DatanodeInfoWithStorage[127.0.0.1:39681,DS-cfaada67-9ff4-4681-b274-de87f0d6ea83,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:33953,DS-28bbf8c2-3863-4f36-8c92-12b0e0b6b376,DISK]) is bad. 2023-05-31 10:55:51,555 WARN [DataStreamer for file /user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530542571 block BP-205262694-148.251.75.209-1685530499838:blk_1073741841_1021] hdfs.DataStreamer(823): DataStreamer Exception org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-205262694-148.251.75.209-1685530499838:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at 
org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) 
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 10:55:54,636 INFO [Listener at localhost.localdomain/42301] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530542571 after 
4001ms 2023-05-31 10:55:54,636 DEBUG [Listener at localhost.localdomain/42301] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530542571 2023-05-31 10:55:54,642 DEBUG [Listener at localhost.localdomain/42301] wal.ProtobufLogReader(420): EOF at position 83 2023-05-31 10:55:54,643 INFO [Listener at localhost.localdomain/42301] regionserver.HRegion(2745): Flushing 033f5aefd7b68aaa0d16cfdbc1d8c2ee 1/1 column families, dataSize=4.20 KB heapSize=4.75 KB 2023-05-31 10:55:54,644 WARN [RS:0;jenkins-hbase20:44663.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=11, requesting roll of WAL org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-205262694-148.251.75.209-1685530499838:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 10:55:54,645 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL 
FSHLog jenkins-hbase20.apache.org%2C44663%2C1685530500377:(num 1685530542571) roll requested 2023-05-31 10:55:54,645 DEBUG [Listener at localhost.localdomain/42301] regionserver.HRegion(2446): Flush status journal for 033f5aefd7b68aaa0d16cfdbc1d8c2ee: 2023-05-31 10:55:54,645 INFO [Listener at localhost.localdomain/42301] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-205262694-148.251.75.209-1685530499838:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at 
org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at 
org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 10:55:54,647 INFO [Listener at localhost.localdomain/42301] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.96 KB heapSize=5.48 KB 2023-05-31 10:55:54,648 WARN [RS_OPEN_META-regionserver/jenkins-hbase20:0-0.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=15, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44803,DS-cfaada67-9ff4-4681-b274-de87f0d6ea83,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 10:55:54,649 DEBUG [Listener at localhost.localdomain/42301] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-05-31 10:55:54,649 INFO [Listener at localhost.localdomain/42301] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at 
java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44803,DS-cfaada67-9ff4-4681-b274-de87f0d6ea83,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 10:55:54,650 INFO [Listener at localhost.localdomain/42301] regionserver.HRegion(2745): Flushing 52dd25eac04998f3629384e93420de10 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-31 10:55:54,650 DEBUG [Listener at localhost.localdomain/42301] regionserver.HRegion(2446): Flush status journal for 52dd25eac04998f3629384e93420de10: 2023-05-31 10:55:54,650 INFO [Listener at localhost.localdomain/42301] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-205262694-148.251.75.209-1685530499838:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 10:55:54,653 INFO [Listener at localhost.localdomain/42301] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-31 10:55:54,653 INFO [Listener at localhost.localdomain/42301] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-31 10:55:54,654 DEBUG [Listener at localhost.localdomain/42301] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x02099140 to 127.0.0.1:60515 2023-05-31 10:55:54,654 DEBUG [Listener at localhost.localdomain/42301] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 10:55:54,654 DEBUG [Listener at localhost.localdomain/42301] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-31 10:55:54,654 DEBUG [Listener at localhost.localdomain/42301] util.JVMClusterUtil(257): Found active master hash=826763674, stopped=false 2023-05-31 10:55:54,654 INFO [Listener at localhost.localdomain/42301] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,45649,1685530500335 2023-05-31 10:55:54,656 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): 
master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 10:55:54,656 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): regionserver:44663-0x101a12875020001, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 10:55:54,656 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:55:54,656 INFO [Listener at localhost.localdomain/42301] procedure2.ProcedureExecutor(629): Stopping 2023-05-31 10:55:54,657 DEBUG [Listener at localhost.localdomain/42301] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x350a50e7 to 127.0.0.1:60515 2023-05-31 10:55:54,657 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44663-0x101a12875020001, quorum=127.0.0.1:60515, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 10:55:54,657 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 10:55:54,657 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530542571 newFile=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530554645 2023-05-31 10:55:54,657 DEBUG 
[Listener at localhost.localdomain/42301] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 10:55:54,658 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL 2023-05-31 10:55:54,658 INFO [Listener at localhost.localdomain/42301] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,44663,1685530500377' ***** 2023-05-31 10:55:54,658 INFO [Listener at localhost.localdomain/42301] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-31 10:55:54,658 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530542571 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530554645 2023-05-31 10:55:54,658 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... 
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-205262694-148.251.75.209-1685530499838:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 10:55:54,658 INFO [RS:0;jenkins-hbase20:44663] regionserver.HeapMemoryManager(220): Stopping 2023-05-31 10:55:54,658 ERROR [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(462): Close of WAL hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530542571 failed. 
Cause="Unexpected BlockUCState: BP-205262694-148.251.75.209-1685530499838:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) ", errors=3, hasUnflushedEntries=false 2023-05-31 10:55:54,658 INFO [RS:0;jenkins-hbase20:44663] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-05-31 10:55:54,658 ERROR [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(426): Failed close of WAL writer hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530542571, unflushedEntries=0 org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-205262694-148.251.75.209-1685530499838:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 10:55:54,658 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-31 10:55:54,658 ERROR [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(221): Roll wal failed and waiting timeout, will not retry org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: 
hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530542571, unflushedEntries=0 at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:427) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.replaceWriter(AbstractFSWAL.java:828) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:884) at org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:304) at org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:211) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-205262694-148.251.75.209-1685530499838:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 10:55:54,658 INFO [RS:0;jenkins-hbase20:44663] snapshot.RegionServerSnapshotManager(137): 
Stopping RegionServerSnapshotManager gracefully. 2023-05-31 10:55:54,659 INFO [RS:0;jenkins-hbase20:44663] regionserver.HRegionServer(3303): Received CLOSE for 033f5aefd7b68aaa0d16cfdbc1d8c2ee 2023-05-31 10:55:54,660 INFO [RS:0;jenkins-hbase20:44663] regionserver.HRegionServer(3303): Received CLOSE for 52dd25eac04998f3629384e93420de10 2023-05-31 10:55:54,660 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377 2023-05-31 10:55:54,660 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 033f5aefd7b68aaa0d16cfdbc1d8c2ee, disabling compactions & flushes 2023-05-31 10:55:54,660 INFO [RS:0;jenkins-hbase20:44663] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,44663,1685530500377 2023-05-31 10:55:54,660 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1685530501521.033f5aefd7b68aaa0d16cfdbc1d8c2ee. 2023-05-31 10:55:54,660 DEBUG [RS:0;jenkins-hbase20:44663] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6c859253 to 127.0.0.1:60515 2023-05-31 10:55:54,660 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... 
java.nio.channels.ClosedChannelException at org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.throwException4Close(DataStreamer.java:324) at org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:151) at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:105) at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58) at java.io.DataOutputStream.write(DataOutputStream.java:107) at java.io.FilterOutputStream.write(FilterOutputStream.java:97) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.writeWALTrailerAndMagic(ProtobufLogWriter.java:140) at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.writeWALTrailer(AbstractProtobufLogWriter.java:234) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.close(ProtobufLogWriter.java:67) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doShutdown(FSHLog.java:492) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:951) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:946) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) 2023-05-31 10:55:54,660 DEBUG [RS:0;jenkins-hbase20:44663] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 10:55:54,660 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685530501521.033f5aefd7b68aaa0d16cfdbc1d8c2ee. 2023-05-31 10:55:54,661 INFO [RS:0;jenkins-hbase20:44663] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
2023-05-31 10:55:54,661 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685530501521.033f5aefd7b68aaa0d16cfdbc1d8c2ee. after waiting 0 ms 2023-05-31 10:55:54,661 INFO [RS:0;jenkins-hbase20:44663] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-31 10:55:54,661 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1685530501521.033f5aefd7b68aaa0d16cfdbc1d8c2ee. 2023-05-31 10:55:54,661 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377 2023-05-31 10:55:54,661 INFO [RS:0;jenkins-hbase20:44663] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-31 10:55:54,661 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 033f5aefd7b68aaa0d16cfdbc1d8c2ee 1/1 column families, dataSize=4.20 KB heapSize=4.98 KB 2023-05-31 10:55:54,662 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44803,DS-cfaada67-9ff4-4681-b274-de87f0d6ea83,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 10:55:54,662 INFO [RS:0;jenkins-hbase20:44663] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-31 10:55:54,662 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(165): Failed to shutdown wal java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44803,DS-cfaada67-9ff4-4681-b274-de87f0d6ea83,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 10:55:54,662 INFO [RS:0;jenkins-hbase20:44663] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-31 10:55:54,662 ERROR [regionserver/jenkins-hbase20:0.logRoller] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase20.apache.org,44663,1685530500377: Failed log close in log roller ***** org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530542571, unflushedEntries=0 at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:427) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.replaceWriter(AbstractFSWAL.java:828) at 
org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:884) at org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:304) at org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:211) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-205262694-148.251.75.209-1685530499838:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at 
com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at 
sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 10:55:54,662 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-31 10:55:54,663 ERROR [regionserver/jenkins-hbase20:0.logRoller] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2023-05-31 10:55:54,662 WARN [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2760): Received unexpected exception trying to write ABORT_FLUSH marker to WAL: java.io.IOException: Cannot append; log is closed, regionName = 
TestLogRolling-testLogRollOnPipelineRestart,,1685530501521.033f5aefd7b68aaa0d16cfdbc1d8c2ee. at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.stampSequenceIdAndPublishToRingBuffer(AbstractFSWAL.java:1166) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.append(FSHLog.java:513) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendMarker(AbstractFSWAL.java:1228) at org.apache.hadoop.hbase.regionserver.wal.WALUtil.doFullMarkerAppendTransaction(WALUtil.java:161) at org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeFlushMarker(WALUtil.java:89) at org.apache.hadoop.hbase.regionserver.HRegion.doAbortFlushToWAL(HRegion.java:2758) at org.apache.hadoop.hbase.regionserver.HRegion.internalPrepareFlushCache(HRegion.java:2711) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2578) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2552) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2543) at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1733) at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1554) at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:105) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:102) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) in region TestLogRolling-testLogRollOnPipelineRestart,,1685530501521.033f5aefd7b68aaa0d16cfdbc1d8c2ee. 
2023-05-31 10:55:54,662 DEBUG [RS:0;jenkins-hbase20:44663] regionserver.HRegionServer(1478): Online Regions={033f5aefd7b68aaa0d16cfdbc1d8c2ee=TestLogRolling-testLogRollOnPipelineRestart,,1685530501521.033f5aefd7b68aaa0d16cfdbc1d8c2ee., 1588230740=hbase:meta,,1.1588230740, 52dd25eac04998f3629384e93420de10=hbase:namespace,,1685530501004.52dd25eac04998f3629384e93420de10.} 2023-05-31 10:55:54,663 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 033f5aefd7b68aaa0d16cfdbc1d8c2ee: 2023-05-31 10:55:54,663 DEBUG [regionserver/jenkins-hbase20:0.logRoller] util.JSONBean(130): Listing beans for java.lang:type=Memory 2023-05-31 10:55:54,663 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-31 10:55:54,663 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnPipelineRestart,,1685530501521.033f5aefd7b68aaa0d16cfdbc1d8c2ee. 
2023-05-31 10:55:54,663 DEBUG [RS:0;jenkins-hbase20:44663] regionserver.HRegionServer(1504): Waiting on 1588230740, 52dd25eac04998f3629384e93420de10 2023-05-31 10:55:54,663 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-31 10:55:54,663 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 52dd25eac04998f3629384e93420de10, disabling compactions & flushes 2023-05-31 10:55:54,663 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-31 10:55:54,663 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685530501004.52dd25eac04998f3629384e93420de10. 2023-05-31 10:55:54,663 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-31 10:55:54,664 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685530501004.52dd25eac04998f3629384e93420de10. 2023-05-31 10:55:54,664 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-31 10:55:54,664 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685530501004.52dd25eac04998f3629384e93420de10. after waiting 0 ms 2023-05-31 10:55:54,664 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. 
Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-05-31 10:55:54,664 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685530501004.52dd25eac04998f3629384e93420de10. 2023-05-31 10:55:54,664 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 52dd25eac04998f3629384e93420de10: 2023-05-31 10:55:54,664 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1685530501004.52dd25eac04998f3629384e93420de10. 2023-05-31 10:55:54,664 DEBUG [regionserver/jenkins-hbase20:0.logRoller] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC 2023-05-31 10:55:54,664 DEBUG [regionserver/jenkins-hbase20:0.logRoller] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication 2023-05-31 10:55:54,664 DEBUG [regionserver/jenkins-hbase20:0.logRoller] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server 2023-05-31 10:55:54,664 INFO [regionserver/jenkins-hbase20:0.logRoller] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 1093140480, "init": 524288000, "max": 2051014656, "used": 370447736 }, "NonHeapMemoryUsage": { "committed": 139288576, "init": 2555904, "max": -1, "used": 136744344 }, "Verbose": false, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] } 2023-05-31 10:55:54,665 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45649] master.MasterRpcServices(609): jenkins-hbase20.apache.org,44663,1685530500377 reported a 
fatal error: ***** ABORTING region server jenkins-hbase20.apache.org,44663,1685530500377: Failed log close in log roller ***** Cause: org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/WALs/jenkins-hbase20.apache.org,44663,1685530500377/jenkins-hbase20.apache.org%2C44663%2C1685530500377.1685530542571, unflushedEntries=0 at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:427) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.replaceWriter(AbstractFSWAL.java:828) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:884) at org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:304) at org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:211) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-205262694-148.251.75.209-1685530499838:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at 
com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at 
org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 10:55:54,665 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C44663%2C1685530500377.meta:.meta(num 1685530500935) roll requested 2023-05-31 10:55:54,665 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(874): WAL closed. Skipping rolling of writer 2023-05-31 10:55:54,727 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-05-31 10:55:54,727 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-05-31 10:55:54,863 INFO [RS:0;jenkins-hbase20:44663] regionserver.HRegionServer(3303): Received CLOSE for 033f5aefd7b68aaa0d16cfdbc1d8c2ee 2023-05-31 10:55:54,864 INFO [RS:0;jenkins-hbase20:44663] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-31 10:55:54,864 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 033f5aefd7b68aaa0d16cfdbc1d8c2ee, disabling compactions & flushes 2023-05-31 10:55:54,864 INFO [RS:0;jenkins-hbase20:44663] regionserver.HRegionServer(3303): Received CLOSE for 52dd25eac04998f3629384e93420de10 2023-05-31 10:55:54,864 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1685530501521.033f5aefd7b68aaa0d16cfdbc1d8c2ee. 
2023-05-31 10:55:54,865 DEBUG [RS:0;jenkins-hbase20:44663] regionserver.HRegionServer(1504): Waiting on 033f5aefd7b68aaa0d16cfdbc1d8c2ee, 1588230740, 52dd25eac04998f3629384e93420de10 2023-05-31 10:55:54,864 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-31 10:55:54,865 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685530501521.033f5aefd7b68aaa0d16cfdbc1d8c2ee. 2023-05-31 10:55:54,865 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-31 10:55:54,865 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685530501521.033f5aefd7b68aaa0d16cfdbc1d8c2ee. after waiting 0 ms 2023-05-31 10:55:54,866 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-31 10:55:54,866 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1685530501521.033f5aefd7b68aaa0d16cfdbc1d8c2ee. 
2023-05-31 10:55:54,866 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-31 10:55:54,866 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 033f5aefd7b68aaa0d16cfdbc1d8c2ee: 2023-05-31 10:55:54,866 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-31 10:55:54,867 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnPipelineRestart,,1685530501521.033f5aefd7b68aaa0d16cfdbc1d8c2ee. 2023-05-31 10:55:54,867 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-31 10:55:54,867 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 52dd25eac04998f3629384e93420de10, disabling compactions & flushes 2023-05-31 10:55:54,867 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-05-31 10:55:54,867 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685530501004.52dd25eac04998f3629384e93420de10. 2023-05-31 10:55:54,868 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685530501004.52dd25eac04998f3629384e93420de10. 2023-05-31 10:55:54,868 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685530501004.52dd25eac04998f3629384e93420de10. 
after waiting 0 ms 2023-05-31 10:55:54,868 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685530501004.52dd25eac04998f3629384e93420de10. 2023-05-31 10:55:54,868 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 52dd25eac04998f3629384e93420de10: 2023-05-31 10:55:54,868 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1685530501004.52dd25eac04998f3629384e93420de10. 2023-05-31 10:55:55,065 INFO [RS:0;jenkins-hbase20:44663] regionserver.HRegionServer(1499): We were exiting though online regions are not empty, because some regions failed closing 2023-05-31 10:55:55,065 INFO [RS:0;jenkins-hbase20:44663] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,44663,1685530500377; all regions closed. 2023-05-31 10:55:55,065 DEBUG [RS:0;jenkins-hbase20:44663] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 10:55:55,065 INFO [RS:0;jenkins-hbase20:44663] regionserver.LeaseManager(133): Closed leases 2023-05-31 10:55:55,066 INFO [RS:0;jenkins-hbase20:44663] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-31 10:55:55,066 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-31 10:55:55,067 INFO [RS:0;jenkins-hbase20:44663] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:44663 2023-05-31 10:55:55,070 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 10:55:55,070 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): regionserver:44663-0x101a12875020001, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,44663,1685530500377 2023-05-31 10:55:55,071 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): regionserver:44663-0x101a12875020001, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 10:55:55,072 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,44663,1685530500377] 2023-05-31 10:55:55,072 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,44663,1685530500377; numProcessing=1 2023-05-31 10:55:55,073 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,44663,1685530500377 already deleted, retry=false 2023-05-31 10:55:55,073 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,44663,1685530500377 expired; onlineServers=0 2023-05-31 10:55:55,073 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,45649,1685530500335' ***** 2023-05-31 10:55:55,073 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; 
onlineServer=0 2023-05-31 10:55:55,073 DEBUG [M:0;jenkins-hbase20:45649] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3578eb6a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-31 10:55:55,073 INFO [M:0;jenkins-hbase20:45649] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,45649,1685530500335 2023-05-31 10:55:55,074 INFO [M:0;jenkins-hbase20:45649] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,45649,1685530500335; all regions closed. 2023-05-31 10:55:55,074 DEBUG [M:0;jenkins-hbase20:45649] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 10:55:55,074 DEBUG [M:0;jenkins-hbase20:45649] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-31 10:55:55,074 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-05-31 10:55:55,074 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685530500540] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685530500540,5,FailOnTimeoutGroup] 2023-05-31 10:55:55,074 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685530500540] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685530500540,5,FailOnTimeoutGroup] 2023-05-31 10:55:55,074 DEBUG [M:0;jenkins-hbase20:45649] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-31 10:55:55,077 INFO [M:0;jenkins-hbase20:45649] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-31 10:55:55,077 INFO [M:0;jenkins-hbase20:45649] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-05-31 10:55:55,077 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-31 10:55:55,077 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:55:55,077 INFO [M:0;jenkins-hbase20:45649] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-05-31 10:55:55,077 DEBUG [M:0;jenkins-hbase20:45649] master.HMaster(1512): Stopping service threads 2023-05-31 10:55:55,078 INFO [M:0;jenkins-hbase20:45649] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-31 10:55:55,078 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 10:55:55,078 ERROR [M:0;jenkins-hbase20:45649] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-31 10:55:55,078 INFO [M:0;jenkins-hbase20:45649] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-31 10:55:55,079 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-31 10:55:55,079 DEBUG [M:0;jenkins-hbase20:45649] zookeeper.ZKUtil(398): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-31 10:55:55,079 WARN [M:0;jenkins-hbase20:45649] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-31 10:55:55,079 INFO [M:0;jenkins-hbase20:45649] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-31 10:55:55,080 INFO [M:0;jenkins-hbase20:45649] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-31 10:55:55,080 DEBUG [M:0;jenkins-hbase20:45649] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-31 10:55:55,080 INFO [M:0;jenkins-hbase20:45649] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 10:55:55,081 DEBUG [M:0;jenkins-hbase20:45649] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 10:55:55,081 DEBUG [M:0;jenkins-hbase20:45649] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-31 10:55:55,081 DEBUG [M:0;jenkins-hbase20:45649] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-31 10:55:55,081 INFO [M:0;jenkins-hbase20:45649] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.18 KB heapSize=45.83 KB 2023-05-31 10:55:55,097 INFO [M:0;jenkins-hbase20:45649] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.18 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/97869d809ce74908b1f3e2f0dee3d52b 2023-05-31 10:55:55,103 DEBUG [M:0;jenkins-hbase20:45649] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/97869d809ce74908b1f3e2f0dee3d52b as hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/97869d809ce74908b1f3e2f0dee3d52b 2023-05-31 10:55:55,109 INFO [M:0;jenkins-hbase20:45649] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41421/user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/97869d809ce74908b1f3e2f0dee3d52b, entries=11, sequenceid=92, filesize=7.0 K 2023-05-31 10:55:55,110 INFO [M:0;jenkins-hbase20:45649] regionserver.HRegion(2948): Finished flush of dataSize ~38.18 KB/39101, heapSize ~45.81 KB/46912, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 29ms, sequenceid=92, compaction requested=false 2023-05-31 10:55:55,111 INFO [M:0;jenkins-hbase20:45649] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-31 10:55:55,111 DEBUG [M:0;jenkins-hbase20:45649] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 10:55:55,112 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/e59f194f-a1c0-49e3-620e-c842a9f0f2cf/MasterData/WALs/jenkins-hbase20.apache.org,45649,1685530500335 2023-05-31 10:55:55,115 INFO [M:0;jenkins-hbase20:45649] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-31 10:55:55,115 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-31 10:55:55,115 INFO [M:0;jenkins-hbase20:45649] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:45649 2023-05-31 10:55:55,117 DEBUG [M:0;jenkins-hbase20:45649] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,45649,1685530500335 already deleted, retry=false 2023-05-31 10:55:55,172 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): regionserver:44663-0x101a12875020001, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 10:55:55,172 INFO [RS:0;jenkins-hbase20:44663] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,44663,1685530500377; zookeeper connection closed. 
2023-05-31 10:55:55,172 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): regionserver:44663-0x101a12875020001, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 10:55:55,173 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@647d79f8] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@647d79f8 2023-05-31 10:55:55,177 INFO [Listener at localhost.localdomain/42301] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-31 10:55:55,272 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 10:55:55,273 DEBUG [Listener at localhost.localdomain/42965-EventThread] zookeeper.ZKWatcher(600): master:45649-0x101a12875020000, quorum=127.0.0.1:60515, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 10:55:55,272 INFO [M:0;jenkins-hbase20:45649] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,45649,1685530500335; zookeeper connection closed. 
2023-05-31 10:55:55,276 WARN [Listener at localhost.localdomain/42301] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 10:55:55,284 INFO [Listener at localhost.localdomain/42301] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 10:55:55,394 WARN [BP-205262694-148.251.75.209-1685530499838 heartbeating to localhost.localdomain/127.0.0.1:41421] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-31 10:55:55,394 WARN [BP-205262694-148.251.75.209-1685530499838 heartbeating to localhost.localdomain/127.0.0.1:41421] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-205262694-148.251.75.209-1685530499838 (Datanode Uuid aeaeaae3-0177-4915-b49e-3a42a77f0c12) service to localhost.localdomain/127.0.0.1:41421 2023-05-31 10:55:55,395 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/cluster_51199b14-5bb5-ea9c-a50b-6e457aca193b/dfs/data/data3/current/BP-205262694-148.251.75.209-1685530499838] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 10:55:55,395 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/cluster_51199b14-5bb5-ea9c-a50b-6e457aca193b/dfs/data/data4/current/BP-205262694-148.251.75.209-1685530499838] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 10:55:55,397 WARN [Listener at localhost.localdomain/42301] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 10:55:55,401 INFO [Listener at localhost.localdomain/42301] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 10:55:55,507 
WARN [BP-205262694-148.251.75.209-1685530499838 heartbeating to localhost.localdomain/127.0.0.1:41421] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-31 10:55:55,508 WARN [BP-205262694-148.251.75.209-1685530499838 heartbeating to localhost.localdomain/127.0.0.1:41421] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-205262694-148.251.75.209-1685530499838 (Datanode Uuid c1bec11b-5927-48ae-b61a-5a4b0da74803) service to localhost.localdomain/127.0.0.1:41421 2023-05-31 10:55:55,508 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/cluster_51199b14-5bb5-ea9c-a50b-6e457aca193b/dfs/data/data1/current/BP-205262694-148.251.75.209-1685530499838] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 10:55:55,509 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/cluster_51199b14-5bb5-ea9c-a50b-6e457aca193b/dfs/data/data2/current/BP-205262694-148.251.75.209-1685530499838] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 10:55:55,522 INFO [Listener at localhost.localdomain/42301] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-05-31 10:55:55,639 INFO [Listener at localhost.localdomain/42301] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-31 10:55:55,652 INFO [Listener at localhost.localdomain/42301] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-31 10:55:55,661 INFO [Listener at localhost.localdomain/42301] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=88 (was 78) Potentially hanging thread: 
IPC Client (1224493049) connection to localhost.localdomain/127.0.0.1:41421 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Client (1224493049) connection to localhost.localdomain/127.0.0.1:41421 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-29-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-28-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-27-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) 
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-28-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-29-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-26-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/42301 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-28-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-27-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-27-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-26-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) 
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1224493049) connection to localhost.localdomain/127.0.0.1:41421 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-29-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost.localdomain:41421 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-26-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:41421 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-3-worker-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) - Thread LEAK? -, OpenFileDescriptor=461 (was 472), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=139 (was 142), ProcessCount=166 (was 168), AvailableMemoryMB=8576 (was 8316) - AvailableMemoryMB LEAK? - 2023-05-31 10:55:55,669 INFO [Listener at localhost.localdomain/42301] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=88, OpenFileDescriptor=461, MaxFileDescriptor=60000, SystemLoadAverage=139, ProcessCount=166, AvailableMemoryMB=8576 2023-05-31 10:55:55,669 INFO [Listener at localhost.localdomain/42301] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-31 10:55:55,669 INFO [Listener at localhost.localdomain/42301] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/hadoop.log.dir so I do NOT create it in target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19 2023-05-31 10:55:55,669 INFO [Listener at localhost.localdomain/42301] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0936408f-cbf8-47bc-25bc-0a80f519c9dc/hadoop.tmp.dir so I do NOT create it in target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19 2023-05-31 10:55:55,669 INFO [Listener at 
localhost.localdomain/42301] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19/cluster_a7861085-8786-7f85-dcec-f79dc7948b7e, deleteOnExit=true 2023-05-31 10:55:55,669 INFO [Listener at localhost.localdomain/42301] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-31 10:55:55,670 INFO [Listener at localhost.localdomain/42301] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19/test.cache.data in system properties and HBase conf 2023-05-31 10:55:55,670 INFO [Listener at localhost.localdomain/42301] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19/hadoop.tmp.dir in system properties and HBase conf 2023-05-31 10:55:55,670 INFO [Listener at localhost.localdomain/42301] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19/hadoop.log.dir in system properties and HBase conf 2023-05-31 10:55:55,670 INFO [Listener at localhost.localdomain/42301] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-31 10:55:55,670 INFO [Listener at localhost.localdomain/42301] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19/mapreduce.cluster.temp.dir 
in system properties and HBase conf 2023-05-31 10:55:55,670 INFO [Listener at localhost.localdomain/42301] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-31 10:55:55,670 DEBUG [Listener at localhost.localdomain/42301] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-31 10:55:55,671 INFO [Listener at localhost.localdomain/42301] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-31 10:55:55,671 INFO [Listener at localhost.localdomain/42301] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-31 10:55:55,671 INFO [Listener at localhost.localdomain/42301] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-31 10:55:55,671 INFO [Listener at localhost.localdomain/42301] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-31 10:55:55,671 INFO [Listener at localhost.localdomain/42301] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-31 10:55:55,671 INFO [Listener at localhost.localdomain/42301] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-31 10:55:55,671 INFO [Listener at localhost.localdomain/42301] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-31 10:55:55,671 INFO [Listener at localhost.localdomain/42301] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-31 10:55:55,672 INFO [Listener at localhost.localdomain/42301] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-31 10:55:55,672 INFO [Listener at localhost.localdomain/42301] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19/nfs.dump.dir in system properties and HBase conf 2023-05-31 
10:55:55,672 INFO [Listener at localhost.localdomain/42301] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19/java.io.tmpdir in system properties and HBase conf 2023-05-31 10:55:55,672 INFO [Listener at localhost.localdomain/42301] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-31 10:55:55,672 INFO [Listener at localhost.localdomain/42301] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-31 10:55:55,672 INFO [Listener at localhost.localdomain/42301] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-31 10:55:55,673 WARN [Listener at localhost.localdomain/42301] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-31 10:55:55,675 WARN [Listener at localhost.localdomain/42301] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-31 10:55:55,675 WARN [Listener at localhost.localdomain/42301] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-31 10:55:55,701 WARN [Listener at localhost.localdomain/42301] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 10:55:55,702 INFO [Listener at localhost.localdomain/42301] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 10:55:55,710 INFO [Listener at localhost.localdomain/42301] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19/java.io.tmpdir/Jetty_localhost_localdomain_37983_hdfs____.x2bx92/webapp 2023-05-31 10:55:55,782 INFO [Listener at localhost.localdomain/42301] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:37983 2023-05-31 10:55:55,784 WARN [Listener at localhost.localdomain/42301] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-31 10:55:55,785 WARN [Listener at localhost.localdomain/42301] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-31 10:55:55,785 WARN [Listener at localhost.localdomain/42301] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-31 10:55:55,809 WARN [Listener at localhost.localdomain/40915] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 10:55:55,816 WARN [Listener at localhost.localdomain/40915] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-31 10:55:55,818 WARN [Listener at localhost.localdomain/40915] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 10:55:55,819 INFO [Listener at localhost.localdomain/40915] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 10:55:55,826 INFO [Listener at localhost.localdomain/40915] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19/java.io.tmpdir/Jetty_localhost_42787_datanode____.13v6qp/webapp 2023-05-31 10:55:55,905 INFO [Listener at localhost.localdomain/40915] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42787 2023-05-31 10:55:55,910 WARN [Listener at localhost.localdomain/41303] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 10:55:55,920 WARN [Listener at localhost.localdomain/41303] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-31 10:55:55,922 WARN [Listener at localhost.localdomain/41303] http.HttpRequestLog(97): Jetty request 
log can only be enabled using Log4j 2023-05-31 10:55:55,923 INFO [Listener at localhost.localdomain/41303] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 10:55:55,926 INFO [Listener at localhost.localdomain/41303] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19/java.io.tmpdir/Jetty_localhost_38001_datanode____.alxwtp/webapp 2023-05-31 10:55:55,974 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x26069672267e65b2: Processing first storage report for DS-ff7982ed-e36e-413a-947d-c86db0873a3d from datanode 128bd9b3-52b8-47f1-8c6d-51aa0160b2ae 2023-05-31 10:55:55,974 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x26069672267e65b2: from storage DS-ff7982ed-e36e-413a-947d-c86db0873a3d node DatanodeRegistration(127.0.0.1:45449, datanodeUuid=128bd9b3-52b8-47f1-8c6d-51aa0160b2ae, infoPort=41571, infoSecurePort=0, ipcPort=41303, storageInfo=lv=-57;cid=testClusterID;nsid=1585084807;c=1685530555677), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 10:55:55,974 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x26069672267e65b2: Processing first storage report for DS-a3fe0b0f-d7cc-4749-ac70-2e8391e6b0e7 from datanode 128bd9b3-52b8-47f1-8c6d-51aa0160b2ae 2023-05-31 10:55:55,974 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x26069672267e65b2: from storage DS-a3fe0b0f-d7cc-4749-ac70-2e8391e6b0e7 node DatanodeRegistration(127.0.0.1:45449, datanodeUuid=128bd9b3-52b8-47f1-8c6d-51aa0160b2ae, infoPort=41571, infoSecurePort=0, ipcPort=41303, 
storageInfo=lv=-57;cid=testClusterID;nsid=1585084807;c=1685530555677), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 10:55:56,006 INFO [Listener at localhost.localdomain/41303] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38001 2023-05-31 10:55:56,012 WARN [Listener at localhost.localdomain/34053] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 10:55:56,108 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xdfdf31b13e769a8b: Processing first storage report for DS-526f11f0-1168-4dfa-96bb-a967f5afcec2 from datanode b2fdce09-9317-4e29-b908-514e55b45abd 2023-05-31 10:55:56,108 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xdfdf31b13e769a8b: from storage DS-526f11f0-1168-4dfa-96bb-a967f5afcec2 node DatanodeRegistration(127.0.0.1:37191, datanodeUuid=b2fdce09-9317-4e29-b908-514e55b45abd, infoPort=33331, infoSecurePort=0, ipcPort=34053, storageInfo=lv=-57;cid=testClusterID;nsid=1585084807;c=1685530555677), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 10:55:56,108 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xdfdf31b13e769a8b: Processing first storage report for DS-893eeda8-1674-48f1-9c0f-db553f20078e from datanode b2fdce09-9317-4e29-b908-514e55b45abd 2023-05-31 10:55:56,108 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xdfdf31b13e769a8b: from storage DS-893eeda8-1674-48f1-9c0f-db553f20078e node DatanodeRegistration(127.0.0.1:37191, datanodeUuid=b2fdce09-9317-4e29-b908-514e55b45abd, infoPort=33331, infoSecurePort=0, ipcPort=34053, storageInfo=lv=-57;cid=testClusterID;nsid=1585084807;c=1685530555677), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 10:55:56,121 DEBUG [Listener at 
localhost.localdomain/34053] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19 2023-05-31 10:55:56,123 INFO [Listener at localhost.localdomain/34053] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19/cluster_a7861085-8786-7f85-dcec-f79dc7948b7e/zookeeper_0, clientPort=49620, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19/cluster_a7861085-8786-7f85-dcec-f79dc7948b7e/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19/cluster_a7861085-8786-7f85-dcec-f79dc7948b7e/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-31 10:55:56,124 INFO [Listener at localhost.localdomain/34053] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=49620 2023-05-31 10:55:56,125 INFO [Listener at localhost.localdomain/34053] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:55:56,126 INFO [Listener at localhost.localdomain/34053] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:55:56,140 INFO [Listener at localhost.localdomain/34053] util.FSUtils(471): Created version file at 
hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be with version=8 2023-05-31 10:55:56,141 INFO [Listener at localhost.localdomain/34053] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/hbase-staging 2023-05-31 10:55:56,142 INFO [Listener at localhost.localdomain/34053] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-05-31 10:55:56,142 INFO [Listener at localhost.localdomain/34053] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 10:55:56,142 INFO [Listener at localhost.localdomain/34053] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-31 10:55:56,142 INFO [Listener at localhost.localdomain/34053] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-31 10:55:56,142 INFO [Listener at localhost.localdomain/34053] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 10:55:56,143 INFO [Listener at localhost.localdomain/34053] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-31 10:55:56,143 INFO [Listener at localhost.localdomain/34053] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, 
hbase.pb.AdminService 2023-05-31 10:55:56,144 INFO [Listener at localhost.localdomain/34053] ipc.NettyRpcServer(120): Bind to /148.251.75.209:35127 2023-05-31 10:55:56,144 INFO [Listener at localhost.localdomain/34053] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:55:56,145 INFO [Listener at localhost.localdomain/34053] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:55:56,146 INFO [Listener at localhost.localdomain/34053] zookeeper.RecoverableZooKeeper(93): Process identifier=master:35127 connecting to ZooKeeper ensemble=127.0.0.1:49620 2023-05-31 10:55:56,151 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:351270x0, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-31 10:55:56,152 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:35127-0x101a1294f060000 connected 2023-05-31 10:55:56,162 DEBUG [Listener at localhost.localdomain/34053] zookeeper.ZKUtil(164): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 10:55:56,163 DEBUG [Listener at localhost.localdomain/34053] zookeeper.ZKUtil(164): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 10:55:56,163 DEBUG [Listener at localhost.localdomain/34053] zookeeper.ZKUtil(164): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-31 10:55:56,164 DEBUG [Listener at localhost.localdomain/34053] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35127 2023-05-31 10:55:56,164 DEBUG [Listener at localhost.localdomain/34053] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35127 2023-05-31 10:55:56,164 DEBUG [Listener at localhost.localdomain/34053] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35127 2023-05-31 10:55:56,164 DEBUG [Listener at localhost.localdomain/34053] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35127 2023-05-31 10:55:56,165 DEBUG [Listener at localhost.localdomain/34053] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35127 2023-05-31 10:55:56,165 INFO [Listener at localhost.localdomain/34053] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be, hbase.cluster.distributed=false 2023-05-31 10:55:56,179 INFO [Listener at localhost.localdomain/34053] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-05-31 10:55:56,179 INFO [Listener at localhost.localdomain/34053] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 10:55:56,179 INFO [Listener at localhost.localdomain/34053] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-31 10:55:56,179 INFO [Listener at localhost.localdomain/34053] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-31 10:55:56,179 INFO [Listener at localhost.localdomain/34053] ipc.RpcExecutor(189): Instantiated 
replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 10:55:56,179 INFO [Listener at localhost.localdomain/34053] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-31 10:55:56,180 INFO [Listener at localhost.localdomain/34053] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-31 10:55:56,181 INFO [Listener at localhost.localdomain/34053] ipc.NettyRpcServer(120): Bind to /148.251.75.209:39533 2023-05-31 10:55:56,181 INFO [Listener at localhost.localdomain/34053] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-31 10:55:56,182 DEBUG [Listener at localhost.localdomain/34053] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-31 10:55:56,182 INFO [Listener at localhost.localdomain/34053] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:55:56,183 INFO [Listener at localhost.localdomain/34053] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:55:56,183 INFO [Listener at localhost.localdomain/34053] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39533 connecting to ZooKeeper ensemble=127.0.0.1:49620 2023-05-31 10:55:56,187 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:395330x0, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-31 10:55:56,188 DEBUG 
[Listener at localhost.localdomain/34053] zookeeper.ZKUtil(164): regionserver:395330x0, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 10:55:56,189 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39533-0x101a1294f060001 connected 2023-05-31 10:55:56,189 DEBUG [Listener at localhost.localdomain/34053] zookeeper.ZKUtil(164): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 10:55:56,190 DEBUG [Listener at localhost.localdomain/34053] zookeeper.ZKUtil(164): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-31 10:55:56,190 DEBUG [Listener at localhost.localdomain/34053] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39533 2023-05-31 10:55:56,190 DEBUG [Listener at localhost.localdomain/34053] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39533 2023-05-31 10:55:56,191 DEBUG [Listener at localhost.localdomain/34053] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39533 2023-05-31 10:55:56,191 DEBUG [Listener at localhost.localdomain/34053] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39533 2023-05-31 10:55:56,191 DEBUG [Listener at localhost.localdomain/34053] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39533 2023-05-31 10:55:56,192 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,35127,1685530556142 2023-05-31 10:55:56,203 DEBUG [Listener at localhost.localdomain/34053-EventThread] 
zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-31 10:55:56,203 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,35127,1685530556142 2023-05-31 10:55:56,213 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-31 10:55:56,213 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-31 10:55:56,213 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:55:56,214 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-31 10:55:56,215 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,35127,1685530556142 from backup master directory 2023-05-31 10:55:56,215 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-31 10:55:56,219 DEBUG [Listener at 
localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,35127,1685530556142 2023-05-31 10:55:56,219 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-31 10:55:56,219 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-31 10:55:56,219 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,35127,1685530556142 2023-05-31 10:55:56,238 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/hbase.id with ID: b319f719-c03c-4d4f-9a53-21cc76f8d7e8 2023-05-31 10:55:56,248 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:55:56,250 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:55:56,258 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x108876a2 to 127.0.0.1:49620 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 10:55:56,263 
DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7c40c9d1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 10:55:56,264 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-31 10:55:56,264 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-31 10:55:56,264 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 10:55:56,266 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/MasterData/data/master/store-tmp 2023-05-31 10:55:56,278 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 10:55:56,278 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-31 10:55:56,278 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 10:55:56,278 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 10:55:56,278 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-31 10:55:56,278 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 10:55:56,278 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-31 10:55:56,278 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 10:55:56,279 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/MasterData/WALs/jenkins-hbase20.apache.org,35127,1685530556142 2023-05-31 10:55:56,281 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C35127%2C1685530556142, suffix=, logDir=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/MasterData/WALs/jenkins-hbase20.apache.org,35127,1685530556142, archiveDir=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/MasterData/oldWALs, maxLogs=10 2023-05-31 10:55:56,289 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/MasterData/WALs/jenkins-hbase20.apache.org,35127,1685530556142/jenkins-hbase20.apache.org%2C35127%2C1685530556142.1685530556281 2023-05-31 10:55:56,289 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37191,DS-526f11f0-1168-4dfa-96bb-a967f5afcec2,DISK], DatanodeInfoWithStorage[127.0.0.1:45449,DS-ff7982ed-e36e-413a-947d-c86db0873a3d,DISK]] 2023-05-31 10:55:56,289 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-31 10:55:56,290 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 10:55:56,290 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:55:56,290 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:55:56,292 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:55:56,294 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-31 10:55:56,294 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-31 10:55:56,295 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] 
regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 10:55:56,296 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:55:56,296 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:55:56,300 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:55:56,302 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 10:55:56,303 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=718998, jitterRate=-0.08574734628200531}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 10:55:56,303 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 10:55:56,303 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, 
flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-31 10:55:56,305 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-31 10:55:56,305 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-31 10:55:56,305 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-31 10:55:56,306 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-31 10:55:56,306 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-31 10:55:56,306 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-31 10:55:56,308 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-31 10:55:56,309 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-05-31 10:55:56,318 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-31 10:55:56,318 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-05-31 10:55:56,319 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-31 10:55:56,319 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-31 10:55:56,320 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-31 10:55:56,322 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:55:56,322 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-31 10:55:56,323 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): 
master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-31 10:55:56,323 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-31 10:55:56,324 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-31 10:55:56,324 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-31 10:55:56,324 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:55:56,325 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,35127,1685530556142, sessionid=0x101a1294f060000, setting cluster-up flag (Was=false) 2023-05-31 10:55:56,327 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:55:56,329 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-31 10:55:56,330 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] 
procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,35127,1685530556142 2023-05-31 10:55:56,331 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:55:56,334 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-31 10:55:56,335 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,35127,1685530556142 2023-05-31 10:55:56,335 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/.hbase-snapshot/.tmp 2023-05-31 10:55:56,337 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-31 10:55:56,337 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-31 10:55:56,337 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-31 10:55:56,338 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-31 10:55:56,338 DEBUG 
[master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-31 10:55:56,338 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-05-31 10:55:56,338 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:55:56,338 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-31 10:55:56,338 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:55:56,342 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685530586342 2023-05-31 10:55:56,342 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-31 10:55:56,343 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-31 10:55:56,343 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-31 10:55:56,343 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-31 10:55:56,343 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-31 10:55:56,343 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-31 10:55:56,343 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-31 10:55:56,343 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-31 10:55:56,344 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-31 10:55:56,344 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-31 10:55:56,344 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-31 10:55:56,344 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-31 10:55:56,344 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-31 10:55:56,344 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-31 10:55:56,345 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large 
file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685530556344,5,FailOnTimeoutGroup] 2023-05-31 10:55:56,345 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685530556345,5,FailOnTimeoutGroup] 2023-05-31 10:55:56,345 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-31 10:55:56,345 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-31 10:55:56,345 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-31 10:55:56,345 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore 
ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-31 10:55:56,345 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-05-31 10:55:56,357 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-31 10:55:56,358 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-31 10:55:56,358 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, 
regionDir=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be 2023-05-31 10:55:56,364 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 10:55:56,365 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-31 10:55:56,367 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/meta/1588230740/info 2023-05-31 10:55:56,367 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-31 10:55:56,368 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 10:55:56,368 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: 
cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-31 10:55:56,369 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/meta/1588230740/rep_barrier 2023-05-31 10:55:56,369 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-31 10:55:56,369 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 10:55:56,369 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-31 10:55:56,370 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/meta/1588230740/table 2023-05-31 10:55:56,371 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-31 10:55:56,371 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 10:55:56,372 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/meta/1588230740 2023-05-31 10:55:56,372 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/meta/1588230740 2023-05-31 10:55:56,374 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-05-31 10:55:56,375 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-31 10:55:56,376 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 10:55:56,377 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=860409, jitterRate=0.09406682848930359}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-31 10:55:56,377 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-31 10:55:56,377 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-31 10:55:56,377 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-31 10:55:56,377 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-31 10:55:56,377 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-31 10:55:56,377 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-31 10:55:56,377 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-31 10:55:56,377 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-31 10:55:56,378 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-31 10:55:56,378 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-31 10:55:56,378 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): 
Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-31 10:55:56,380 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-31 10:55:56,381 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-31 10:55:56,394 INFO [RS:0;jenkins-hbase20:39533] regionserver.HRegionServer(951): ClusterId : b319f719-c03c-4d4f-9a53-21cc76f8d7e8 2023-05-31 10:55:56,394 DEBUG [RS:0;jenkins-hbase20:39533] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-31 10:55:56,397 DEBUG [RS:0;jenkins-hbase20:39533] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-31 10:55:56,397 DEBUG [RS:0;jenkins-hbase20:39533] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-31 10:55:56,399 DEBUG [RS:0;jenkins-hbase20:39533] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-31 10:55:56,400 DEBUG [RS:0;jenkins-hbase20:39533] zookeeper.ReadOnlyZKClient(139): Connect 0x46dd1ae1 to 127.0.0.1:49620 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 10:55:56,404 DEBUG [RS:0;jenkins-hbase20:39533] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@481aec4a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind 
address=null 2023-05-31 10:55:56,405 DEBUG [RS:0;jenkins-hbase20:39533] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4c257191, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-31 10:55:56,416 DEBUG [RS:0;jenkins-hbase20:39533] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:39533 2023-05-31 10:55:56,416 INFO [RS:0;jenkins-hbase20:39533] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-31 10:55:56,416 INFO [RS:0;jenkins-hbase20:39533] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-31 10:55:56,416 DEBUG [RS:0;jenkins-hbase20:39533] regionserver.HRegionServer(1022): About to register with Master. 2023-05-31 10:55:56,417 INFO [RS:0;jenkins-hbase20:39533] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,35127,1685530556142 with isa=jenkins-hbase20.apache.org/148.251.75.209:39533, startcode=1685530556178 2023-05-31 10:55:56,417 DEBUG [RS:0;jenkins-hbase20:39533] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-31 10:55:56,421 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:54691, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-05-31 10:55:56,422 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:55:56,422 DEBUG [RS:0;jenkins-hbase20:39533] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be 
2023-05-31 10:55:56,422 DEBUG [RS:0;jenkins-hbase20:39533] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:40915 2023-05-31 10:55:56,422 DEBUG [RS:0;jenkins-hbase20:39533] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-31 10:55:56,423 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 10:55:56,424 DEBUG [RS:0;jenkins-hbase20:39533] zookeeper.ZKUtil(162): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:55:56,424 WARN [RS:0;jenkins-hbase20:39533] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-31 10:55:56,424 INFO [RS:0;jenkins-hbase20:39533] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 10:55:56,424 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,39533,1685530556178] 2023-05-31 10:55:56,424 DEBUG [RS:0;jenkins-hbase20:39533] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/WALs/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:55:56,429 DEBUG [RS:0;jenkins-hbase20:39533] zookeeper.ZKUtil(162): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:55:56,429 DEBUG [RS:0;jenkins-hbase20:39533] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-31 10:55:56,430 INFO [RS:0;jenkins-hbase20:39533] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-31 10:55:56,431 INFO [RS:0;jenkins-hbase20:39533] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-31 10:55:56,431 INFO [RS:0;jenkins-hbase20:39533] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-31 10:55:56,431 INFO [RS:0;jenkins-hbase20:39533] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-05-31 10:55:56,431 INFO [RS:0;jenkins-hbase20:39533] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S
2023-05-31 10:55:56,433 INFO [RS:0;jenkins-hbase20:39533] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled.
2023-05-31 10:55:56,434 DEBUG [RS:0;jenkins-hbase20:39533] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:55:56,434 DEBUG [RS:0;jenkins-hbase20:39533] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:55:56,434 DEBUG [RS:0;jenkins-hbase20:39533] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:55:56,434 DEBUG [RS:0;jenkins-hbase20:39533] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:55:56,434 DEBUG [RS:0;jenkins-hbase20:39533] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:55:56,434 DEBUG [RS:0;jenkins-hbase20:39533] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2
2023-05-31 10:55:56,434 DEBUG [RS:0;jenkins-hbase20:39533] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:55:56,434 DEBUG [RS:0;jenkins-hbase20:39533] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:55:56,434 DEBUG [RS:0;jenkins-hbase20:39533] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:55:56,434 DEBUG [RS:0;jenkins-hbase20:39533] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:55:56,434 INFO [RS:0;jenkins-hbase20:39533] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled.
2023-05-31 10:55:56,435 INFO [RS:0;jenkins-hbase20:39533] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled.
2023-05-31 10:55:56,435 INFO [RS:0;jenkins-hbase20:39533] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled.
2023-05-31 10:55:56,447 INFO [RS:0;jenkins-hbase20:39533] regionserver.HeapMemoryManager(209): Starting, tuneOn=false
2023-05-31 10:55:56,447 INFO [RS:0;jenkins-hbase20:39533] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39533,1685530556178-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled.
2023-05-31 10:55:56,455 INFO [RS:0;jenkins-hbase20:39533] regionserver.Replication(203): jenkins-hbase20.apache.org,39533,1685530556178 started
2023-05-31 10:55:56,456 INFO [RS:0;jenkins-hbase20:39533] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,39533,1685530556178, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:39533, sessionid=0x101a1294f060001
2023-05-31 10:55:56,456 DEBUG [RS:0;jenkins-hbase20:39533] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting
2023-05-31 10:55:56,456 DEBUG [RS:0;jenkins-hbase20:39533] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,39533,1685530556178
2023-05-31 10:55:56,456 DEBUG [RS:0;jenkins-hbase20:39533] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,39533,1685530556178'
2023-05-31 10:55:56,456 DEBUG [RS:0;jenkins-hbase20:39533] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-05-31 10:55:56,456 DEBUG [RS:0;jenkins-hbase20:39533] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-05-31 10:55:56,457 DEBUG [RS:0;jenkins-hbase20:39533] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started
2023-05-31 10:55:56,457 DEBUG [RS:0;jenkins-hbase20:39533] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting
2023-05-31 10:55:56,457 DEBUG [RS:0;jenkins-hbase20:39533] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,39533,1685530556178
2023-05-31 10:55:56,457 DEBUG [RS:0;jenkins-hbase20:39533] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,39533,1685530556178'
2023-05-31 10:55:56,457 DEBUG [RS:0;jenkins-hbase20:39533] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort'
2023-05-31 10:55:56,457 DEBUG [RS:0;jenkins-hbase20:39533] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired'
2023-05-31 10:55:56,458 DEBUG [RS:0;jenkins-hbase20:39533] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started
2023-05-31 10:55:56,458 INFO [RS:0;jenkins-hbase20:39533] quotas.RegionServerRpcQuotaManager(63): Quota support disabled
2023-05-31 10:55:56,458 INFO [RS:0;jenkins-hbase20:39533] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager.
2023-05-31 10:55:56,531 DEBUG [jenkins-hbase20:35127] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1
2023-05-31 10:55:56,532 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,39533,1685530556178, state=OPENING
2023-05-31 10:55:56,534 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it
2023-05-31 10:55:56,535 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 10:55:56,537 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,39533,1685530556178}]
2023-05-31 10:55:56,537 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED
2023-05-31 10:55:56,559 INFO [RS:0;jenkins-hbase20:39533] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C39533%2C1685530556178, suffix=, logDir=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/WALs/jenkins-hbase20.apache.org,39533,1685530556178, archiveDir=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/oldWALs, maxLogs=32
2023-05-31 10:55:56,567 INFO [RS:0;jenkins-hbase20:39533] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/WALs/jenkins-hbase20.apache.org,39533,1685530556178/jenkins-hbase20.apache.org%2C39533%2C1685530556178.1685530556560
2023-05-31 10:55:56,567 DEBUG [RS:0;jenkins-hbase20:39533] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45449,DS-ff7982ed-e36e-413a-947d-c86db0873a3d,DISK], DatanodeInfoWithStorage[127.0.0.1:37191,DS-526f11f0-1168-4dfa-96bb-a967f5afcec2,DISK]]
2023-05-31 10:55:56,641 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-05-31 10:55:56,695 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,39533,1685530556178
2023-05-31 10:55:56,695 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false
2023-05-31 10:55:56,702 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:57964, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService
2023-05-31 10:55:56,708 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740
2023-05-31 10:55:56,709 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-05-31 10:55:56,711 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C39533%2C1685530556178.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/WALs/jenkins-hbase20.apache.org,39533,1685530556178, archiveDir=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/oldWALs, maxLogs=32
2023-05-31 10:55:56,729 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/WALs/jenkins-hbase20.apache.org,39533,1685530556178/jenkins-hbase20.apache.org%2C39533%2C1685530556178.meta.1685530556711.meta
2023-05-31 10:55:56,729 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37191,DS-526f11f0-1168-4dfa-96bb-a967f5afcec2,DISK], DatanodeInfoWithStorage[127.0.0.1:45449,DS-ff7982ed-e36e-413a-947d-c86db0873a3d,DISK]]
2023-05-31 10:55:56,730 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}
2023-05-31 10:55:56,730 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911
2023-05-31 10:55:56,730 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService
2023-05-31 10:55:56,730 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully.
2023-05-31 10:55:56,730 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740
2023-05-31 10:55:56,730 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 10:55:56,731 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740
2023-05-31 10:55:56,731 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740
2023-05-31 10:55:56,732 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740
2023-05-31 10:55:56,733 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/meta/1588230740/info
2023-05-31 10:55:56,733 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/meta/1588230740/info
2023-05-31 10:55:56,734 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info
2023-05-31 10:55:56,734 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 10:55:56,734 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740
2023-05-31 10:55:56,735 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/meta/1588230740/rep_barrier
2023-05-31 10:55:56,736 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/meta/1588230740/rep_barrier
2023-05-31 10:55:56,736 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier
2023-05-31 10:55:56,736 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 10:55:56,737 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740
2023-05-31 10:55:56,738 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/meta/1588230740/table
2023-05-31 10:55:56,738 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/meta/1588230740/table
2023-05-31 10:55:56,738 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table
2023-05-31 10:55:56,739 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 10:55:56,739 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/meta/1588230740
2023-05-31 10:55:56,740 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/meta/1588230740
2023-05-31 10:55:56,742 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead.
2023-05-31 10:55:56,743 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740
2023-05-31 10:55:56,744 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=703890, jitterRate=-0.10495774447917938}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216}
2023-05-31 10:55:56,744 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740:
2023-05-31 10:55:56,746 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685530556695
2023-05-31 10:55:56,750 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740
2023-05-31 10:55:56,751 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740
2023-05-31 10:55:56,751 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,39533,1685530556178, state=OPEN
2023-05-31 10:55:56,753 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server
2023-05-31 10:55:56,753 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED
2023-05-31 10:55:56,755 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2
2023-05-31 10:55:56,755 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,39533,1685530556178 in 216 msec
2023-05-31 10:55:56,757 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1
2023-05-31 10:55:56,757 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 377 msec
2023-05-31 10:55:56,759 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 422 msec
2023-05-31 10:55:56,760 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685530556760, completionTime=-1
2023-05-31 10:55:56,760 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running
2023-05-31 10:55:56,760 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster...
2023-05-31 10:55:56,764 DEBUG [hconnection-0x7ab477be-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-05-31 10:55:56,765 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:57972, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-05-31 10:55:56,767 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1
2023-05-31 10:55:56,767 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685530616767
2023-05-31 10:55:56,767 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685530676767
2023-05-31 10:55:56,767 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec
2023-05-31 10:55:56,772 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,35127,1685530556142-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled.
2023-05-31 10:55:56,772 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,35127,1685530556142-BalancerChore, period=300000, unit=MILLISECONDS is enabled.
2023-05-31 10:55:56,772 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,35127,1685530556142-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled.
2023-05-31 10:55:56,772 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:35127, period=300000, unit=MILLISECONDS is enabled.
2023-05-31 10:55:56,772 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled.
2023-05-31 10:55:56,772 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating...
2023-05-31 10:55:56,772 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}
2023-05-31 10:55:56,773 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175):
2023-05-31 10:55:56,773 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace
2023-05-31 10:55:56,775 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION
2023-05-31 10:55:56,776 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT
2023-05-31 10:55:56,778 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/.tmp/data/hbase/namespace/6274e512daae1bed1f549f8d51e72d0b
2023-05-31 10:55:56,779 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/.tmp/data/hbase/namespace/6274e512daae1bed1f549f8d51e72d0b empty.
2023-05-31 10:55:56,780 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/.tmp/data/hbase/namespace/6274e512daae1bed1f549f8d51e72d0b
2023-05-31 10:55:56,780 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions
2023-05-31 10:55:56,796 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001
2023-05-31 10:55:56,797 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6274e512daae1bed1f549f8d51e72d0b, NAME => 'hbase:namespace,,1685530556772.6274e512daae1bed1f549f8d51e72d0b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/.tmp
2023-05-31 10:55:56,807 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685530556772.6274e512daae1bed1f549f8d51e72d0b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 10:55:56,807 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 6274e512daae1bed1f549f8d51e72d0b, disabling compactions & flushes
2023-05-31 10:55:56,807 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685530556772.6274e512daae1bed1f549f8d51e72d0b.
2023-05-31 10:55:56,807 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685530556772.6274e512daae1bed1f549f8d51e72d0b.
2023-05-31 10:55:56,807 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685530556772.6274e512daae1bed1f549f8d51e72d0b. after waiting 0 ms
2023-05-31 10:55:56,807 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685530556772.6274e512daae1bed1f549f8d51e72d0b.
2023-05-31 10:55:56,807 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685530556772.6274e512daae1bed1f549f8d51e72d0b.
2023-05-31 10:55:56,807 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 6274e512daae1bed1f549f8d51e72d0b:
2023-05-31 10:55:56,809 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META
2023-05-31 10:55:56,810 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685530556772.6274e512daae1bed1f549f8d51e72d0b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685530556810"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685530556810"}]},"ts":"1685530556810"}
2023-05-31 10:55:56,813 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta.
2023-05-31 10:55:56,814 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS
2023-05-31 10:55:56,814 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685530556814"}]},"ts":"1685530556814"}
2023-05-31 10:55:56,815 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta
2023-05-31 10:55:56,819 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=6274e512daae1bed1f549f8d51e72d0b, ASSIGN}]
2023-05-31 10:55:56,821 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=6274e512daae1bed1f549f8d51e72d0b, ASSIGN
2023-05-31 10:55:56,822 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=6274e512daae1bed1f549f8d51e72d0b, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,39533,1685530556178; forceNewPlan=false, retain=false
2023-05-31 10:55:56,975 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=6274e512daae1bed1f549f8d51e72d0b, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,39533,1685530556178
2023-05-31 10:55:56,975 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1685530556772.6274e512daae1bed1f549f8d51e72d0b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685530556975"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685530556975"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685530556975"}]},"ts":"1685530556975"}
2023-05-31 10:55:56,977 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 6274e512daae1bed1f549f8d51e72d0b, server=jenkins-hbase20.apache.org,39533,1685530556178}]
2023-05-31 10:55:57,139 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685530556772.6274e512daae1bed1f549f8d51e72d0b.
2023-05-31 10:55:57,140 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6274e512daae1bed1f549f8d51e72d0b, NAME => 'hbase:namespace,,1685530556772.6274e512daae1bed1f549f8d51e72d0b.', STARTKEY => '', ENDKEY => ''}
2023-05-31 10:55:57,140 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 6274e512daae1bed1f549f8d51e72d0b
2023-05-31 10:55:57,141 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685530556772.6274e512daae1bed1f549f8d51e72d0b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 10:55:57,141 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 6274e512daae1bed1f549f8d51e72d0b
2023-05-31 10:55:57,141 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 6274e512daae1bed1f549f8d51e72d0b
2023-05-31 10:55:57,143 INFO [StoreOpener-6274e512daae1bed1f549f8d51e72d0b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 6274e512daae1bed1f549f8d51e72d0b
2023-05-31 10:55:57,145 DEBUG [StoreOpener-6274e512daae1bed1f549f8d51e72d0b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/namespace/6274e512daae1bed1f549f8d51e72d0b/info
2023-05-31 10:55:57,145 DEBUG [StoreOpener-6274e512daae1bed1f549f8d51e72d0b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/namespace/6274e512daae1bed1f549f8d51e72d0b/info
2023-05-31 10:55:57,146 INFO [StoreOpener-6274e512daae1bed1f549f8d51e72d0b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6274e512daae1bed1f549f8d51e72d0b columnFamilyName info
2023-05-31 10:55:57,147 INFO [StoreOpener-6274e512daae1bed1f549f8d51e72d0b-1] regionserver.HStore(310): Store=6274e512daae1bed1f549f8d51e72d0b/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 10:55:57,148 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/namespace/6274e512daae1bed1f549f8d51e72d0b
2023-05-31 10:55:57,148 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/namespace/6274e512daae1bed1f549f8d51e72d0b
2023-05-31 10:55:57,152 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 6274e512daae1bed1f549f8d51e72d0b
2023-05-31 10:55:57,156 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/namespace/6274e512daae1bed1f549f8d51e72d0b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-05-31 10:55:57,157 INFO  [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 6274e512daae1bed1f549f8d51e72d0b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=804300, jitterRate=0.02272053062915802}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-05-31 10:55:57,157 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 6274e512daae1bed1f549f8d51e72d0b:
2023-05-31 10:55:57,159 INFO  [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685530556772.6274e512daae1bed1f549f8d51e72d0b., pid=6, masterSystemTime=1685530557129
2023-05-31 10:55:57,161 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685530556772.6274e512daae1bed1f549f8d51e72d0b.
2023-05-31 10:55:57,161 INFO  [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685530556772.6274e512daae1bed1f549f8d51e72d0b.
2023-05-31 10:55:57,163 INFO  [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=6274e512daae1bed1f549f8d51e72d0b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,39533,1685530556178
2023-05-31 10:55:57,163 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685530556772.6274e512daae1bed1f549f8d51e72d0b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685530557162"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685530557162"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685530557162"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685530557162"}]},"ts":"1685530557162"}
2023-05-31 10:55:57,167 INFO  [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5
2023-05-31 10:55:57,167 INFO  [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 6274e512daae1bed1f549f8d51e72d0b, server=jenkins-hbase20.apache.org,39533,1685530556178 in 188 msec
2023-05-31 10:55:57,170 INFO  [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4
2023-05-31 10:55:57,170 INFO  [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=6274e512daae1bed1f549f8d51e72d0b, ASSIGN in 348 msec
2023-05-31 10:55:57,171 INFO  [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE
2023-05-31 10:55:57,171 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685530557171"}]},"ts":"1685530557171"}
2023-05-31 10:55:57,173 INFO  [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta
2023-05-31 10:55:57,175 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace
2023-05-31 10:55:57,175 INFO  [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION
2023-05-31 10:55:57,175 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace
2023-05-31 10:55:57,176 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 10:55:57,178 INFO  [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 403 msec
2023-05-31 10:55:57,179 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default
2023-05-31 10:55:57,196 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace
2023-05-31 10:55:57,199 INFO  [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 20 msec
2023-05-31 10:55:57,201 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase
2023-05-31 10:55:57,213 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace
2023-05-31 10:55:57,216 INFO  [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 15 msec
2023-05-31 10:55:57,230 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default
2023-05-31 10:55:57,233 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase
2023-05-31 10:55:57,233 INFO  [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.014sec
2023-05-31 10:55:57,233 INFO  [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled
2023-05-31 10:55:57,233 INFO  [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting.
2023-05-31 10:55:57,233 INFO  [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding
2023-05-31 10:55:57,233 INFO  [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,35127,1685530556142-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled.
2023-05-31 10:55:57,233 INFO  [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,35127,1685530556142-MobCompactionChore, period=604800, unit=SECONDS is enabled.
2023-05-31 10:55:57,237 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds
2023-05-31 10:55:57,296 DEBUG [Listener at localhost.localdomain/34053] zookeeper.ReadOnlyZKClient(139): Connect 0x72a2ab3b to 127.0.0.1:49620 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-05-31 10:55:57,301 DEBUG [Listener at localhost.localdomain/34053] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@655dcbda, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-05-31 10:55:57,305 DEBUG [hconnection-0x5567adb8-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-05-31 10:55:57,309 INFO  [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:57988, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-05-31 10:55:57,311 INFO  [Listener at localhost.localdomain/34053] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,35127,1685530556142
2023-05-31 10:55:57,311 INFO  [Listener at localhost.localdomain/34053] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 10:55:57,314 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer
2023-05-31 10:55:57,314 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 10:55:57,314 INFO  [Listener at localhost.localdomain/34053] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false
2023-05-31 10:55:57,316 DEBUG [Listener at localhost.localdomain/34053] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false
2023-05-31 10:55:57,320 INFO  [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:39752, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService
2023-05-31 10:55:57,321 WARN  [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions.
2023-05-31 10:55:57,321 WARN  [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing.
2023-05-31 10:55:57,321 INFO  [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-05-31 10:55:57,323 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:55:57,325 INFO  [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_PRE_OPERATION
2023-05-31 10:55:57,325 INFO  [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] master.MasterRpcServices(697): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testCompactionRecordDoesntBlockRolling" procId is: 9
2023-05-31 10:55:57,326 INFO  [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT
2023-05-31 10:55:57,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9
2023-05-31 10:55:57,327 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d
2023-05-31 10:55:57,328 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d empty.
2023-05-31 10:55:57,328 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d
2023-05-31 10:55:57,328 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testCompactionRecordDoesntBlockRolling regions
2023-05-31 10:55:57,339 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/.tabledesc/.tableinfo.0000000001
2023-05-31 10:55:57,340 INFO  [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => 46845520ed600006d81badfba6c65c5d, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/.tmp
2023-05-31 10:55:57,347 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 10:55:57,347 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1604): Closing 46845520ed600006d81badfba6c65c5d, disabling compactions & flushes
2023-05-31 10:55:57,347 INFO  [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d.
2023-05-31 10:55:57,347 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d.
2023-05-31 10:55:57,347 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d. after waiting 0 ms
2023-05-31 10:55:57,347 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d.
2023-05-31 10:55:57,347 INFO  [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d.
2023-05-31 10:55:57,347 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1558): Region close journal for 46845520ed600006d81badfba6c65c5d:
2023-05-31 10:55:57,350 INFO  [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ADD_TO_META
2023-05-31 10:55:57,351 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1685530557351"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685530557351"}]},"ts":"1685530557351"}
2023-05-31 10:55:57,353 INFO  [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta.
2023-05-31 10:55:57,354 INFO  [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ASSIGN_REGIONS
2023-05-31 10:55:57,354 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685530557354"}]},"ts":"1685530557354"}
2023-05-31 10:55:57,356 INFO  [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLING in hbase:meta
2023-05-31 10:55:57,360 INFO  [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=46845520ed600006d81badfba6c65c5d, ASSIGN}]
2023-05-31 10:55:57,362 INFO  [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=46845520ed600006d81badfba6c65c5d, ASSIGN
2023-05-31 10:55:57,364 INFO  [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=46845520ed600006d81badfba6c65c5d, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,39533,1685530556178; forceNewPlan=false, retain=false
2023-05-31 10:55:57,515 INFO  [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=46845520ed600006d81badfba6c65c5d, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,39533,1685530556178
2023-05-31 10:55:57,516 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1685530557515"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685530557515"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685530557515"}]},"ts":"1685530557515"}
2023-05-31 10:55:57,520 INFO  [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 46845520ed600006d81badfba6c65c5d, server=jenkins-hbase20.apache.org,39533,1685530556178}]
2023-05-31 10:55:57,678 INFO  [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d.
2023-05-31 10:55:57,679 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 46845520ed600006d81badfba6c65c5d, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d.', STARTKEY => '', ENDKEY => ''}
2023-05-31 10:55:57,679 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testCompactionRecordDoesntBlockRolling 46845520ed600006d81badfba6c65c5d
2023-05-31 10:55:57,679 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 10:55:57,679 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 46845520ed600006d81badfba6c65c5d
2023-05-31 10:55:57,679 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 46845520ed600006d81badfba6c65c5d
2023-05-31 10:55:57,680 INFO  [StoreOpener-46845520ed600006d81badfba6c65c5d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 46845520ed600006d81badfba6c65c5d
2023-05-31 10:55:57,682 DEBUG [StoreOpener-46845520ed600006d81badfba6c65c5d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/info
2023-05-31 10:55:57,682 DEBUG [StoreOpener-46845520ed600006d81badfba6c65c5d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/info
2023-05-31 10:55:57,682 INFO  [StoreOpener-46845520ed600006d81badfba6c65c5d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 46845520ed600006d81badfba6c65c5d columnFamilyName info
2023-05-31 10:55:57,683 INFO  [StoreOpener-46845520ed600006d81badfba6c65c5d-1] regionserver.HStore(310): Store=46845520ed600006d81badfba6c65c5d/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 10:55:57,684 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d
2023-05-31 10:55:57,684 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d
2023-05-31 10:55:57,687 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 46845520ed600006d81badfba6c65c5d
2023-05-31 10:55:57,689 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-05-31 10:55:57,690 INFO  [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 46845520ed600006d81badfba6c65c5d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=727962, jitterRate=-0.07434919476509094}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-05-31 10:55:57,690 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 46845520ed600006d81badfba6c65c5d:
2023-05-31 10:55:57,691 INFO  [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d., pid=11, masterSystemTime=1685530557675
2023-05-31 10:55:57,693 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d.
2023-05-31 10:55:57,693 INFO  [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d.
2023-05-31 10:55:57,693 INFO  [PEWorker-5] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=46845520ed600006d81badfba6c65c5d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,39533,1685530556178
2023-05-31 10:55:57,694 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1685530557693"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685530557693"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685530557693"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685530557693"}]},"ts":"1685530557693"}
2023-05-31 10:55:57,697 INFO  [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10
2023-05-31 10:55:57,698 INFO  [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 46845520ed600006d81badfba6c65c5d, server=jenkins-hbase20.apache.org,39533,1685530556178 in 175 msec
2023-05-31 10:55:57,700 INFO  [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9
2023-05-31 10:55:57,700 INFO  [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=46845520ed600006d81badfba6c65c5d, ASSIGN in 338 msec
2023-05-31 10:55:57,700 INFO  [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE
2023-05-31 10:55:57,700 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685530557700"}]},"ts":"1685530557700"}
2023-05-31 10:55:57,702 INFO  [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLED in hbase:meta
2023-05-31 10:55:57,705 INFO  [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_POST_OPERATION
2023-05-31 10:55:57,706 INFO  [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling in 383 msec
2023-05-31 10:56:02,288 WARN  [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties
2023-05-31 10:56:02,430 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testCompactionRecordDoesntBlockRolling'
2023-05-31 10:56:07,328 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9
2023-05-31 10:56:07,328 INFO  [Listener at localhost.localdomain/34053] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testCompactionRecordDoesntBlockRolling, procId: 9 completed
2023-05-31 10:56:07,332 DEBUG [Listener at localhost.localdomain/34053] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:07,332 DEBUG [Listener at localhost.localdomain/34053] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d.
2023-05-31 10:56:07,348 INFO  [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] master.MasterRpcServices(933): Client=jenkins//148.251.75.209 procedure request for: flush-table-proc
2023-05-31 10:56:07,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] procedure.ProcedureCoordinator(165): Submitting procedure hbase:namespace
2023-05-31 10:56:07,357 INFO  [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'hbase:namespace'
2023-05-31 10:56:07,357 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms
2023-05-31 10:56:07,358 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'hbase:namespace' starting 'acquire'
2023-05-31 10:56:07,358 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'hbase:namespace', kicking off acquire phase on members.
2023-05-31 10:56:07,358 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-05-31 10:56:07,359 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-31 10:56:07,360 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-31 10:56:07,360 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:07,360 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-31 10:56:07,360 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 10:56:07,360 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:07,360 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-31 10:56:07,360 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/hbase:namespace 2023-05-31 10:56:07,360 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-05-31 10:56:07,361 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-31 10:56:07,361 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-31 10:56:07,361 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for hbase:namespace 2023-05-31 10:56:07,363 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:hbase:namespace 2023-05-31 10:56:07,363 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'hbase:namespace' with timeout 60000ms 2023-05-31 10:56:07,363 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-31 10:56:07,363 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'hbase:namespace' starting 'acquire' stage 2023-05-31 10:56:07,364 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-31 10:56:07,364 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 
2023-05-31 10:56:07,364 DEBUG [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on hbase:namespace,,1685530556772.6274e512daae1bed1f549f8d51e72d0b. 2023-05-31 10:56:07,364 DEBUG [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region hbase:namespace,,1685530556772.6274e512daae1bed1f549f8d51e72d0b. started... 2023-05-31 10:56:07,365 INFO [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 6274e512daae1bed1f549f8d51e72d0b 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-31 10:56:07,377 INFO [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/namespace/6274e512daae1bed1f549f8d51e72d0b/.tmp/info/fb996c45c76e44df83806fdc7a56189a 2023-05-31 10:56:07,385 DEBUG [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/namespace/6274e512daae1bed1f549f8d51e72d0b/.tmp/info/fb996c45c76e44df83806fdc7a56189a as hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/namespace/6274e512daae1bed1f549f8d51e72d0b/info/fb996c45c76e44df83806fdc7a56189a 2023-05-31 10:56:07,393 INFO [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/namespace/6274e512daae1bed1f549f8d51e72d0b/info/fb996c45c76e44df83806fdc7a56189a, entries=2, sequenceid=6, filesize=4.8 K 
2023-05-31 10:56:07,394 INFO [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 6274e512daae1bed1f549f8d51e72d0b in 29ms, sequenceid=6, compaction requested=false 2023-05-31 10:56:07,394 DEBUG [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 6274e512daae1bed1f549f8d51e72d0b: 2023-05-31 10:56:07,394 DEBUG [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on hbase:namespace,,1685530556772.6274e512daae1bed1f549f8d51e72d0b. 2023-05-31 10:56:07,395 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-31 10:56:07,395 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
2023-05-31 10:56:07,395 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:07,395 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'hbase:namespace' locally acquired 2023-05-31 10:56:07,395 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase20.apache.org,39533,1685530556178' joining acquired barrier for procedure (hbase:namespace) in zk 2023-05-31 10:56:07,396 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:07,396 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-31 10:56:07,396 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:07,396 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 10:56:07,396 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 10:56:07,396 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, 
/hbase/flush-table-proc/reached/hbase:namespace 2023-05-31 10:56:07,397 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'hbase:namespace' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-31 10:56:07,397 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 10:56:07,397 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 10:56:07,397 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-31 10:56:07,398 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:07,398 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 10:56:07,398 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase20.apache.org,39533,1685530556178' joining acquired barrier for procedure 'hbase:namespace' on coordinator 2023-05-31 10:56:07,399 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'hbase:namespace' starting 'in-barrier' execution. 
2023-05-31 10:56:07,399 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@5400b513[Count = 0] remaining members to acquire global barrier 2023-05-31 10:56:07,399 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-31 10:56:07,400 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-05-31 10:56:07,400 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-31 10:56:07,400 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-31 10:56:07,400 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'hbase:namespace' received 'reached' from coordinator. 
2023-05-31 10:56:07,400 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'hbase:namespace' locally completed 2023-05-31 10:56:07,400 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'hbase:namespace' completed for member 'jenkins-hbase20.apache.org,39533,1685530556178' in zk 2023-05-31 10:56:07,400 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:07,400 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-31 10:56:07,401 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'hbase:namespace' has notified controller of completion 2023-05-31 10:56:07,401 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-31 10:56:07,401 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:07,401 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'hbase:namespace' completed. 
2023-05-31 10:56:07,402 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:07,402 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 10:56:07,402 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 10:56:07,402 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 10:56:07,402 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 10:56:07,403 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-31 10:56:07,403 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:07,403 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 10:56:07,403 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-31 10:56:07,404 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:07,404 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'hbase:namespace' member 'jenkins-hbase20.apache.org,39533,1685530556178': 2023-05-31 10:56:07,404 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase20.apache.org,39533,1685530556178' released barrier for procedure 'hbase:namespace', counting down latch. 
Waiting for 0 more 2023-05-31 10:56:07,404 INFO [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'hbase:namespace' execution completed 2023-05-31 10:56:07,404 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-05-31 10:56:07,404 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-31 10:56:07,404 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:hbase:namespace 2023-05-31 10:56:07,404 INFO [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure hbase:namespace including nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-31 10:56:07,406 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-05-31 10:56:07,406 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-05-31 10:56:07,406 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/hbase:namespace 2023-05-31 10:56:07,406 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 10:56:07,406 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 10:56:07,406 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/hbase:namespace 2023-05-31 10:56:07,406 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-31 10:56:07,406 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace 2023-05-31 10:56:07,407 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 10:56:07,407 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:07,407 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-31 10:56:07,407 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 10:56:07,407 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-31 10:56:07,407 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace 2023-05-31 10:56:07,407 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 10:56:07,407 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-31 10:56:07,408 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:07,408 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:07,408 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 10:56:07,408 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-31 10:56:07,408 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:07,419 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:07,419 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-31 10:56:07,419 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-31 10:56:07,419 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-31 10:56:07,419 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 10:56:07,419 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-31 10:56:07,419 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-31 10:56:07,419 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:07,419 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-05-31 10:56:07,419 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-31 10:56:07,420 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-05-31 10:56:07,420 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-31 10:56:07,421 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-05-31 10:56:07,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'hbase:namespace' 2023-05-31 10:56:07,421 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 10:56:07,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-05-31 10:56:07,423 DEBUG [Listener at localhost.localdomain/34053] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : hbase:namespace' to complete. (max 20000 ms per retry) 2023-05-31 10:56:07,423 DEBUG [Listener at localhost.localdomain/34053] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-31 10:56:17,423 DEBUG [Listener at localhost.localdomain/34053] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-05-31 10:56:17,432 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-31 10:56:17,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] master.MasterRpcServices(933): Client=jenkins//148.251.75.209 procedure request for: flush-table-proc 2023-05-31 10:56:17,450 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:17,450 INFO [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-31 10:56:17,450 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-31 10:56:17,451 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-31 10:56:17,451 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
2023-05-31 10:56:17,451 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:17,451 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:17,452 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:17,452 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-31 10:56:17,453 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-31 10:56:17,453 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 10:56:17,453 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:17,453 DEBUG 
[(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-31 10:56:17,453 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:17,453 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:17,454 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-31 10:56:17,454 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:17,454 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:17,454 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:17,454 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-31 10:56:17,454 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-31 10:56:17,455 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 
'acquire' stage 2023-05-31 10:56:17,455 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-31 10:56:17,455 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-31 10:56:17,455 DEBUG [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d. 2023-05-31 10:56:17,455 DEBUG [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d. started... 2023-05-31 10:56:17,456 INFO [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 46845520ed600006d81badfba6c65c5d 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-31 10:56:17,471 INFO [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=5 (bloomFilter=true), to=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/.tmp/info/4d8facc983ff406faf148e02b82d41c6 2023-05-31 10:56:17,483 DEBUG [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/.tmp/info/4d8facc983ff406faf148e02b82d41c6 as hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/info/4d8facc983ff406faf148e02b82d41c6 2023-05-31 10:56:17,489 INFO [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/info/4d8facc983ff406faf148e02b82d41c6, entries=1, sequenceid=5, filesize=5.8 K 2023-05-31 10:56:17,490 INFO [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 46845520ed600006d81badfba6c65c5d in 34ms, sequenceid=5, compaction requested=false 2023-05-31 10:56:17,491 DEBUG [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 46845520ed600006d81badfba6c65c5d: 2023-05-31 10:56:17,491 DEBUG [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d. 2023-05-31 10:56:17,491 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 
2023-05-31 10:56:17,491 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-05-31 10:56:17,491 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:17,491 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-31 10:56:17,491 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase20.apache.org,39533,1685530556178' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-31 10:56:17,492 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:17,492 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:17,492 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:17,492 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 10:56:17,492 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 10:56:17,493 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:17,493 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-31 10:56:17,493 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 10:56:17,493 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 10:56:17,493 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:17,493 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:17,494 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 10:56:17,494 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase20.apache.org,39533,1685530556178' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-31 10:56:17,494 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@503f2b4b[Count = 0] remaining members to acquire global barrier 2023-05-31 10:56:17,494 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 
'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-31 10:56:17,494 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:17,495 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:17,495 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:17,495 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:17,495 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 
2023-05-31 10:56:17,495 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-31 10:56:17,495 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase20.apache.org,39533,1685530556178' in zk 2023-05-31 10:56:17,495 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:17,495 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-31 10:56:17,496 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:17,496 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-31 10:56:17,496 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-31 10:56:17,496 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-05-31 10:56:17,496 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:17,497 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 10:56:17,497 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 10:56:17,497 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 10:56:17,497 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 10:56:17,497 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:17,497 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:17,498 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 10:56:17,498 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:17,498 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:17,498 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase20.apache.org,39533,1685530556178': 2023-05-31 10:56:17,499 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase20.apache.org,39533,1685530556178' released barrier for 
procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-31 10:56:17,499 INFO [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-31 10:56:17,499 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-05-31 10:56:17,499 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-31 10:56:17,499 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:17,499 INFO [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRolling including nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-31 10:56:17,500 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:17,500 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:17,500 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:17,500 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 10:56:17,500 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 10:56:17,500 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:17,500 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-31 10:56:17,500 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:17,501 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 10:56:17,501 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:17,501 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-31 10:56:17,501 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 10:56:17,501 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): 
|----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:17,501 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:17,501 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 10:56:17,501 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:17,502 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:17,502 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:17,502 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 10:56:17,502 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:17,503 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:17,505 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:17,505 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase 
Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:17,505 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-31 10:56:17,505 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:17,505 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-31 10:56:17,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-31 10:56:17,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 
2023-05-31 10:56:17,505 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-31 10:56:17,505 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-05-31 10:56:17,505 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 10:56:17,505 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-31 10:56:17,505 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:17,505 DEBUG [Listener at localhost.localdomain/34053] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling' to complete. (max 20000 ms per retry) 2023-05-31 10:56:17,506 DEBUG [Listener at localhost.localdomain/34053] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 
2023-05-31 10:56:17,505 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:17,506 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:17,506 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:17,506 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-31 10:56:17,506 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 10:56:27,506 DEBUG [Listener at localhost.localdomain/34053] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-05-31 10:56:27,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-31 10:56:27,519 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] master.MasterRpcServices(933): Client=jenkins//148.251.75.209 procedure request for: flush-table-proc 2023-05-31 10:56:27,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-05-31 10:56:27,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:27,523 INFO [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-31 10:56:27,523 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-31 10:56:27,524 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-31 10:56:27,524 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
2023-05-31 10:56:27,524 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:27,524 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:27,525 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-31 10:56:27,525 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:27,525 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-31 10:56:27,525 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 10:56:27,526 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:27,526 DEBUG 
[(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-31 10:56:27,526 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:27,526 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:27,526 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-31 10:56:27,526 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:27,526 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:27,526 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-05-31 10:56:27,527 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:27,527 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-31 10:56:27,527 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-31 10:56:27,527 
DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-31 10:56:27,527 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-31 10:56:27,527 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-31 10:56:27,527 DEBUG [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d. 2023-05-31 10:56:27,527 DEBUG [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d. started... 
2023-05-31 10:56:27,528 INFO [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 46845520ed600006d81badfba6c65c5d 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-31 10:56:27,537 INFO [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/.tmp/info/8353b7e21dc34e87838617e4115f27ff 2023-05-31 10:56:27,545 DEBUG [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/.tmp/info/8353b7e21dc34e87838617e4115f27ff as hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/info/8353b7e21dc34e87838617e4115f27ff 2023-05-31 10:56:27,551 INFO [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/info/8353b7e21dc34e87838617e4115f27ff, entries=1, sequenceid=9, filesize=5.8 K 2023-05-31 10:56:27,552 INFO [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 46845520ed600006d81badfba6c65c5d in 24ms, sequenceid=9, compaction 
requested=false 2023-05-31 10:56:27,552 DEBUG [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 46845520ed600006d81badfba6c65c5d: 2023-05-31 10:56:27,552 DEBUG [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d. 2023-05-31 10:56:27,552 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-31 10:56:27,552 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-05-31 10:56:27,552 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:27,552 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-31 10:56:27,552 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase20.apache.org,39533,1685530556178' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-31 10:56:27,554 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 
2023-05-31 10:56:27,554 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:27,554 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:27,554 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 10:56:27,554 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 10:56:27,554 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:27,554 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-31 10:56:27,555 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 10:56:27,555 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 10:56:27,555 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:27,555 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 
10:56:27,555 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 10:56:27,556 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase20.apache.org,39533,1685530556178' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-31 10:56:27,556 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@153dbf28[Count = 0] remaining members to acquire global barrier 2023-05-31 10:56:27,556 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-31 10:56:27,556 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:27,556 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:27,556 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:27,556 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:27,557 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 
received 'reached' from coordinator. 2023-05-31 10:56:27,557 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-31 10:56:27,557 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase20.apache.org,39533,1685530556178' in zk 2023-05-31 10:56:27,557 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:27,557 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-31 10:56:27,558 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-31 10:56:27,558 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:27,558 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-31 10:56:27,558 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:27,558 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 10:56:27,558 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 10:56:27,558 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-05-31 10:56:27,559 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 10:56:27,559 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 10:56:27,559 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:27,559 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:27,560 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 10:56:27,560 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:27,560 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:27,561 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase20.apache.org,39533,1685530556178': 2023-05-31 10:56:27,561 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase20.apache.org,39533,1685530556178' released barrier for 
procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-31 10:56:27,561 INFO [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-31 10:56:27,561 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-05-31 10:56:27,561 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-31 10:56:27,561 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:27,561 INFO [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRolling including nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-31 10:56:27,562 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:27,562 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:27,562 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:27,562 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 10:56:27,562 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 10:56:27,562 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:27,562 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:27,562 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-31 10:56:27,563 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 10:56:27,563 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-31 10:56:27,563 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 10:56:27,563 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:27,563 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): 
|----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:27,563 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:27,563 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 10:56:27,564 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:27,564 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:27,564 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:27,564 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 10:56:27,565 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:27,565 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:27,567 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:27,568 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase 
Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:27,568 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:27,568 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:27,568 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-31 10:56:27,568 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-05-31 10:56:27,568 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:27,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-31 10:56:27,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 
2023-05-31 10:56:27,568 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:27,568 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-31 10:56:27,568 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:27,569 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-31 10:56:27,569 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-31 10:56:27,569 DEBUG [Listener at localhost.localdomain/34053] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling' to complete. 
(max 20000 ms per retry) 2023-05-31 10:56:27,569 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 10:56:27,569 DEBUG [Listener at localhost.localdomain/34053] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-31 10:56:27,569 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-31 10:56:27,569 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 10:56:37,569 DEBUG [Listener at localhost.localdomain/34053] client.HBaseAdmin(2704): Getting current status of procedure from master... 2023-05-31 10:56:37,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-31 10:56:37,584 INFO [Listener at localhost.localdomain/34053] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/WALs/jenkins-hbase20.apache.org,39533,1685530556178/jenkins-hbase20.apache.org%2C39533%2C1685530556178.1685530556560 with entries=13, filesize=6.44 KB; new WAL /user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/WALs/jenkins-hbase20.apache.org,39533,1685530556178/jenkins-hbase20.apache.org%2C39533%2C1685530556178.1685530597572 2023-05-31 10:56:37,584 DEBUG [Listener at localhost.localdomain/34053] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45449,DS-ff7982ed-e36e-413a-947d-c86db0873a3d,DISK], DatanodeInfoWithStorage[127.0.0.1:37191,DS-526f11f0-1168-4dfa-96bb-a967f5afcec2,DISK]] 2023-05-31 10:56:37,584 DEBUG [Listener at localhost.localdomain/34053] wal.AbstractFSWAL(716): 
hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/WALs/jenkins-hbase20.apache.org,39533,1685530556178/jenkins-hbase20.apache.org%2C39533%2C1685530556178.1685530556560 is not closed yet, will try archiving it next time 2023-05-31 10:56:37,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] master.MasterRpcServices(933): Client=jenkins//148.251.75.209 procedure request for: flush-table-proc 2023-05-31 10:56:37,592 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-05-31 10:56:37,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:37,593 INFO [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-31 10:56:37,593 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-31 10:56:37,593 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-31 10:56:37,593 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
2023-05-31 10:56:37,594 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:37,594 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:37,595 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:37,595 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-31 10:56:37,595 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-31 10:56:37,595 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 10:56:37,596 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:37,596 DEBUG 
[(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-31 10:56:37,596 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:37,596 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:37,596 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-31 10:56:37,596 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:37,597 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:37,597 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-05-31 10:56:37,597 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:37,597 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-31 10:56:37,597 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-31 10:56:37,597 
DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-31 10:56:37,598 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-31 10:56:37,598 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-31 10:56:37,598 DEBUG [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d. 2023-05-31 10:56:37,598 DEBUG [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d. started... 
2023-05-31 10:56:37,598 INFO [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 46845520ed600006d81badfba6c65c5d 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-31 10:56:37,614 INFO [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=13 (bloomFilter=true), to=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/.tmp/info/47a0536a0b534fae9e0fa291e4aac55e 2023-05-31 10:56:37,623 DEBUG [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/.tmp/info/47a0536a0b534fae9e0fa291e4aac55e as hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/info/47a0536a0b534fae9e0fa291e4aac55e 2023-05-31 10:56:37,629 INFO [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/info/47a0536a0b534fae9e0fa291e4aac55e, entries=1, sequenceid=13, filesize=5.8 K 2023-05-31 10:56:37,630 INFO [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 46845520ed600006d81badfba6c65c5d in 32ms, sequenceid=13, compaction 
requested=true 2023-05-31 10:56:37,630 DEBUG [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 46845520ed600006d81badfba6c65c5d: 2023-05-31 10:56:37,630 DEBUG [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d. 2023-05-31 10:56:37,630 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-31 10:56:37,631 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-05-31 10:56:37,631 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:37,631 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-31 10:56:37,631 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase20.apache.org,39533,1685530556178' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-31 10:56:37,634 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 
2023-05-31 10:56:37,634 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:37,634 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:37,634 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 10:56:37,634 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 10:56:37,635 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:37,635 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-31 10:56:37,635 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 10:56:37,635 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 10:56:37,635 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:37,635 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 
10:56:37,636 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached
2023-05-31 10:56:37,636 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase20.apache.org,39533,1685530556178' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator
2023-05-31 10:56:37,636 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@759bf32c[Count = 0] remaining members to acquire global barrier
2023-05-31 10:56:37,636 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution.
2023-05-31 10:56:37,636 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:37,637 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:37,637 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:37,637 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:37,637 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator.
2023-05-31 10:56:37,637 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed
2023-05-31 10:56:37,637 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178
2023-05-31 10:56:37,637 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release'
2023-05-31 10:56:37,637 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase20.apache.org,39533,1685530556178' in zk
2023-05-31 10:56:37,638 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178
2023-05-31 10:56:37,638 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion
2023-05-31 10:56:37,638 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178
2023-05-31 10:56:37,639 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system:
2023-05-31 10:56:37,639 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc
2023-05-31 10:56:37,638 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer.
2023-05-31 10:56:37,639 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed.
2023-05-31 10:56:37,640 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort
2023-05-31 10:56:37,640 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired
2023-05-31 10:56:37,640 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:37,640 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39533,1685530556178
2023-05-31 10:56:37,641 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached
2023-05-31 10:56:37,641 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:37,641 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39533,1685530556178
2023-05-31 10:56:37,642 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase20.apache.org,39533,1685530556178':
2023-05-31 10:56:37,642 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase20.apache.org,39533,1685530556178' released barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more
2023-05-31 10:56:37,642 INFO [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed
2023-05-31 10:56:37,642 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase.
2023-05-31 10:56:37,642 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures
2023-05-31 10:56:37,642 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:37,642 INFO [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRolling including nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort
2023-05-31 10:56:37,643 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:37,643 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:37,643 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:37,643 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system:
2023-05-31 10:56:37,643 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc
2023-05-31 10:56:37,643 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:37,643 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:37,643 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort
2023-05-31 10:56:37,644 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort
2023-05-31 10:56:37,644 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort
2023-05-31 10:56:37,644 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-05-31 10:56:37,644 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178
2023-05-31 10:56:37,644 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:37,644 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:37,644 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired
2023-05-31 10:56:37,645 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:37,645 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39533,1685530556178
2023-05-31 10:56:37,645 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178
2023-05-31 10:56:37,645 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached
2023-05-31 10:56:37,646 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:37,646 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39533,1685530556178
2023-05-31 10:56:37,651 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired
2023-05-31 10:56:37,651 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178
2023-05-31 10:56:37,651 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort
2023-05-31 10:56:37,651 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:37,651 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired
2023-05-31 10:56:37,651 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-05-31 10:56:37,651 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error)
2023-05-31 10:56:37,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling'
2023-05-31 10:56:37,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful!
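The acquire/reached/release exchange traced above is a two-phase barrier: every member joins the acquired barrier, the coordinator then creates the reached znode, and only after every member reports completion does the coordinator clear the procedure's znodes. A minimal in-process sketch of that handshake, with plain `threading` standing in for ZooKeeper; the class and znode names are illustrative, not HBase APIs:

```python
# Sketch of the coordinator/member handshake seen in the log: members
# "acquire", the coordinator publishes "reached", members run the in-barrier
# work and "release", then the coordinator clears the procedure's nodes.
import threading

class BarrierProcedure:
    def __init__(self, members):
        self.members = set(members)
        self.znodes = set()                  # stands in for /hbase/flush-table-proc/*
        self.acquired = threading.Event()
        self.reached = threading.Event()
        self.released = threading.Event()
        self._acq, self._rel = set(), set()
        self._lock = threading.Lock()

    # member side: join the acquired barrier, wait for "reached", do the work, release
    def member_run(self, name, work):
        with self._lock:
            self.znodes.add(f"acquired/{name}")
            self._acq.add(name)
            if self._acq == self.members:
                self.acquired.set()
        self.reached.wait()
        work()                               # e.g. the region flush happens in-barrier
        with self._lock:
            self.znodes.add(f"reached/{name}")
            self._rel.add(name)
            if self._rel == self.members:
                self.released.set()

    # coordinator side: mirrors Procedure(203)/(207)/(211)/(216) above
    def coordinate(self):
        self.acquired.wait()                 # "Waiting for all members to 'acquire'"
        self.znodes.add("reached")           # "Creating reached barrier zk node"
        self.reached.set()
        self.released.wait()                 # "Waiting for all members to 'release'"
        self.znodes.clear()                  # "Clearing all znodes for procedure"
        return "completed"
```

With a single member, `coordinate()` returns only after the member has flushed and released, mirroring the "execution completed" and znode-cleanup lines above.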
2023-05-31 10:56:37,651 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:37,651 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer.
2023-05-31 10:56:37,652 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort
2023-05-31 10:56:37,652 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-05-31 10:56:37,651 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178
2023-05-31 10:56:37,652 DEBUG [Listener at localhost.localdomain/34053] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling' to complete. (max 20000 ms per retry)
2023-05-31 10:56:37,652 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:37,652 DEBUG [Listener at localhost.localdomain/34053] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion.
2023-05-31 10:56:37,653 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:37,653 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:47,653 DEBUG [Listener at localhost.localdomain/34053] client.HBaseAdmin(2704): Getting current status of procedure from master...
2023-05-31 10:56:47,656 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done
2023-05-31 10:56:47,657 DEBUG [Listener at localhost.localdomain/34053] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-05-31 10:56:47,667 DEBUG [Listener at localhost.localdomain/34053] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 17769 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-05-31 10:56:47,667 DEBUG [Listener at localhost.localdomain/34053] regionserver.HStore(1912): 46845520ed600006d81badfba6c65c5d/info is initiating minor compaction (all files)
2023-05-31 10:56:47,667 INFO [Listener at localhost.localdomain/34053] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms
2023-05-31 10:56:47,667 INFO [Listener at localhost.localdomain/34053] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled.
2023-05-31 10:56:47,667 INFO [Listener at localhost.localdomain/34053] regionserver.HRegion(2259): Starting compaction of 46845520ed600006d81badfba6c65c5d/info in TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d.
2023-05-31 10:56:47,668 INFO [Listener at localhost.localdomain/34053] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/info/4d8facc983ff406faf148e02b82d41c6, hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/info/8353b7e21dc34e87838617e4115f27ff, hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/info/47a0536a0b534fae9e0fa291e4aac55e] into tmpdir=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/.tmp, totalSize=17.4 K
2023-05-31 10:56:47,668 DEBUG [Listener at localhost.localdomain/34053] compactions.Compactor(207): Compacting 4d8facc983ff406faf148e02b82d41c6, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=5, earliestPutTs=1685530577440
2023-05-31 10:56:47,669 DEBUG [Listener at localhost.localdomain/34053] compactions.Compactor(207): Compacting 8353b7e21dc34e87838617e4115f27ff, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=9, earliestPutTs=1685530587510
2023-05-31 10:56:47,669 DEBUG [Listener at localhost.localdomain/34053] compactions.Compactor(207): Compacting 47a0536a0b534fae9e0fa291e4aac55e, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=13, earliestPutTs=1685530597571
2023-05-31 10:56:47,682 INFO [Listener at localhost.localdomain/34053] throttle.PressureAwareThroughputController(145): 46845520ed600006d81badfba6c65c5d#info#compaction#19 average throughput is 3.08 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-05-31 10:56:47,698 DEBUG [Listener at localhost.localdomain/34053] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/.tmp/info/4579937c42b149919a0119651dab635e as hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/info/4579937c42b149919a0119651dab635e
2023-05-31 10:56:47,707 INFO [Listener at localhost.localdomain/34053] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 46845520ed600006d81badfba6c65c5d/info of 46845520ed600006d81badfba6c65c5d into 4579937c42b149919a0119651dab635e(size=8.0 K), total size for store is 8.0 K. This selection was in queue for 0sec, and took 0sec to execute.
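The ExploringCompactionPolicy line above ("selected 3 files of size 17769 ... 1 in ratio") reflects a ratio-based search over contiguous runs of store files. A rough, simplified sketch of that idea (not HBase's actual implementation; the `ratio` and `min_files` defaults are illustrative):

```python
# Simplified sketch of ratio-based compaction selection: among contiguous
# runs of store-file sizes, keep only runs that are "in ratio" (every file
# no larger than ratio * sum of the others), preferring more files and
# then fewer total bytes to compact.
def select_compaction(sizes, ratio=1.2, min_files=2):
    best = None
    for start in range(len(sizes)):
        for end in range(start + min_files, len(sizes) + 1):
            run = sizes[start:end]
            total = sum(run)
            if all(f <= ratio * (total - f) for f in run):
                key = (len(run), -total)     # most files, then smallest size
                if best is None or key > best[0]:
                    best = (key, run)
    return best[1] if best else []
```

On three similarly sized files, as in this test, the whole run is in ratio and all three are selected; a run dominated by one huge file is rejected, which is how minor compactions avoid rewriting a large file for little gain.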
2023-05-31 10:56:47,707 DEBUG [Listener at localhost.localdomain/34053] regionserver.HRegion(2289): Compaction status journal for 46845520ed600006d81badfba6c65c5d:
2023-05-31 10:56:47,722 INFO [Listener at localhost.localdomain/34053] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/WALs/jenkins-hbase20.apache.org,39533,1685530556178/jenkins-hbase20.apache.org%2C39533%2C1685530556178.1685530597572 with entries=4, filesize=2.45 KB; new WAL /user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/WALs/jenkins-hbase20.apache.org,39533,1685530556178/jenkins-hbase20.apache.org%2C39533%2C1685530556178.1685530607709
2023-05-31 10:56:47,722 DEBUG [Listener at localhost.localdomain/34053] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37191,DS-526f11f0-1168-4dfa-96bb-a967f5afcec2,DISK], DatanodeInfoWithStorage[127.0.0.1:45449,DS-ff7982ed-e36e-413a-947d-c86db0873a3d,DISK]]
2023-05-31 10:56:47,722 DEBUG [Listener at localhost.localdomain/34053] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/WALs/jenkins-hbase20.apache.org,39533,1685530556178/jenkins-hbase20.apache.org%2C39533%2C1685530556178.1685530597572 is not closed yet, will try archiving it next time
2023-05-31 10:56:47,722 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/WALs/jenkins-hbase20.apache.org,39533,1685530556178/jenkins-hbase20.apache.org%2C39533%2C1685530556178.1685530556560 to hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/oldWALs/jenkins-hbase20.apache.org%2C39533%2C1685530556178.1685530556560
2023-05-31 10:56:47,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] master.MasterRpcServices(933): Client=jenkins//148.251.75.209 procedure request for: flush-table-proc
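The roll at 10:56:47,722 closes the active writer, opens a new file named with the server prefix plus an epoch-millisecond suffix, and lets WAL-Archive move old logs to oldWALs once their entries are covered by flushes. A toy sketch of the naming and the archive-eligibility check; the helper names and path layout here are invented for illustration, not HBase APIs:

```python
# Toy sketch of WAL rolling: the new writer path is <dir>/<prefix>.<millis>,
# and a closed WAL becomes archivable once its highest sequence id is covered
# by the regions' flushed sequence id (so replay would never need it).
import posixpath

def roll_wal(wal_dir, prefix, now_millis):
    # e.g. WALs/<server>/<server-prefix>.1685530607709
    return posixpath.join(wal_dir, f"{prefix}.{now_millis}")

def archivable(wals, flushed_seq):
    # wals maps WAL name -> highest sequence id written into it
    return [name for name, max_seq in wals.items() if max_seq <= flushed_seq]
```

This is why the log can archive the oldest WAL immediately but says the just-rolled one "is not closed yet, will try archiving it next time": eligibility depends on flush progress, not on the roll itself.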
2023-05-31 10:56:47,730 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt.
2023-05-31 10:56:47,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:47,731 INFO [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling'
2023-05-31 10:56:47,731 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms
2023-05-31 10:56:47,731 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire'
2023-05-31 10:56:47,732 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members.
2023-05-31 10:56:47,732 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:47,732 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:47,734 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178
2023-05-31 10:56:47,734 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired
2023-05-31 10:56:47,734 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired
2023-05-31 10:56:47,734 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-05-31 10:56:47,734 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178
2023-05-31 10:56:47,734 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:47,734 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire'
2023-05-31 10:56:47,734 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:47,735 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4
2023-05-31 10:56:47,735 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:47,735 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:47,735 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing
2023-05-31 10:56:47,735 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:47,735 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms
2023-05-31 10:56:47,735 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms
2023-05-31 10:56:47,735 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage
2023-05-31 10:56:47,736 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions
2023-05-31 10:56:47,736 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish.
2023-05-31 10:56:47,736 DEBUG [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d.
2023-05-31 10:56:47,736 DEBUG [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d. started...
2023-05-31 10:56:47,736 INFO [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 46845520ed600006d81badfba6c65c5d 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB
2023-05-31 10:56:47,751 INFO [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=18 (bloomFilter=true), to=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/.tmp/info/afb82627d984443d9016929bff8580f0
2023-05-31 10:56:47,757 DEBUG [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/.tmp/info/afb82627d984443d9016929bff8580f0 as hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/info/afb82627d984443d9016929bff8580f0
2023-05-31 10:56:47,762 INFO [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/info/afb82627d984443d9016929bff8580f0, entries=1, sequenceid=18, filesize=5.8 K
2023-05-31 10:56:47,763 INFO [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 46845520ed600006d81badfba6c65c5d in 27ms, sequenceid=18, compaction requested=false
2023-05-31 10:56:47,763 DEBUG [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 46845520ed600006d81badfba6c65c5d:
2023-05-31 10:56:47,763 DEBUG [rs(jenkins-hbase20.apache.org,39533,1685530556178)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d.
2023-05-31 10:56:47,763 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks.
2023-05-31 10:56:47,764 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks.
2023-05-31 10:56:47,764 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase20.apache.org,39533,1685530556178
2023-05-31 10:56:47,764 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired
2023-05-31 10:56:47,764 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase20.apache.org,39533,1685530556178' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk
2023-05-31 10:56:47,767 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
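The DefaultStoreFlusher and HRegionFileSystem lines above show the flush writing the memstore snapshot into the region's `.tmp` directory and then "Committing ... as ..." by renaming it into the store directory. A minimal local-filesystem sketch of that two-step commit; the function name is invented and the local FS stands in for HDFS:

```python
# Sketch of a flush commit: write the flushed data to <region>/.tmp/<family>/,
# then rename into <region>/<family>/ so readers only ever see complete files.
import os
import tempfile

def commit_flush(region_dir, family, hfile_name, data):
    tmp_dir = os.path.join(region_dir, ".tmp", family)
    store_dir = os.path.join(region_dir, family)
    os.makedirs(tmp_dir, exist_ok=True)
    os.makedirs(store_dir, exist_ok=True)
    tmp_path = os.path.join(tmp_dir, hfile_name)
    with open(tmp_path, "wb") as f:
        f.write(data)                        # the flushed memstore contents
    final_path = os.path.join(store_dir, hfile_name)
    os.rename(tmp_path, final_path)          # the "commit": an atomic rename
    return final_path
```

The rename is the whole point of the detour through `.tmp`: a crash mid-write leaves only an unreferenced temp file, never a truncated store file.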
2023-05-31 10:56:47,767 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178
2023-05-31 10:56:47,767 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178
2023-05-31 10:56:47,767 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system:
2023-05-31 10:56:47,767 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc
2023-05-31 10:56:47,768 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:47,768 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator
2023-05-31 10:56:47,768 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort
2023-05-31 10:56:47,768 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired
2023-05-31 10:56:47,768 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:47,768 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39533,1685530556178
2023-05-31 10:56:47,769 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached
2023-05-31 10:56:47,769 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase20.apache.org,39533,1685530556178' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator
2023-05-31 10:56:47,769 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@24e268a1[Count = 0] remaining members to acquire global barrier
2023-05-31 10:56:47,769 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution.
2023-05-31 10:56:47,769 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:47,770 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:47,770 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:47,770 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 10:56:47,770 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling'
received 'reached' from coordinator. 2023-05-31 10:56:47,770 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:47,770 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-31 10:56:47,770 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-31 10:56:47,770 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase20.apache.org,39533,1685530556178' in zk 2023-05-31 10:56:47,772 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-31 10:56:47,772 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:47,772 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-31 10:56:47,772 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:47,772 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 10:56:47,773 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 10:56:47,772 DEBUG [member: 'jenkins-hbase20.apache.org,39533,1685530556178' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-05-31 10:56:47,773 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 10:56:47,773 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 10:56:47,773 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:47,774 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:47,774 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 10:56:47,774 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:47,774 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:47,775 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase20.apache.org,39533,1685530556178': 2023-05-31 10:56:47,775 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase20.apache.org,39533,1685530556178' released barrier for 
procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-31 10:56:47,775 INFO [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-31 10:56:47,775 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-05-31 10:56:47,775 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-31 10:56:47,775 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:47,775 INFO [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-31 10:56:47,776 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:47,776 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-31 10:56:47,776 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): 
Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:47,776 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:47,776 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:47,776 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:47,776 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 10:56:47,776 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 10:56:47,777 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-31 10:56:47,777 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 10:56:47,777 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:47,777 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 10:56:47,777 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting 
procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:47,777 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:47,777 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 10:56:47,778 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:47,778 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:47,778 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:47,778 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 10:56:47,778 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:47,779 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:47,781 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:47,781 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received 
ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-31 10:56:47,781 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:47,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-31 10:56:47,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-05-31 10:56:47,781 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-05-31 10:56:47,781 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:47,781 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-31 10:56:47,781 DEBUG [(jenkins-hbase20.apache.org,35127,1685530556142)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-31 10:56:47,781 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-31 10:56:47,781 DEBUG [Listener at localhost.localdomain/34053] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-05-31 10:56:47,781 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 10:56:47,781 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:47,782 DEBUG [Listener at localhost.localdomain/34053] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 
2023-05-31 10:56:47,782 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:47,782 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:47,782 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 10:56:47,782 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-31 10:56:47,782 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 10:56:57,782 DEBUG [Listener at localhost.localdomain/34053] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-05-31 10:56:57,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35127] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-31 10:56:57,801 INFO [Listener at localhost.localdomain/34053] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/WALs/jenkins-hbase20.apache.org,39533,1685530556178/jenkins-hbase20.apache.org%2C39533%2C1685530556178.1685530607709 with entries=3, filesize=1.97 KB; new WAL /user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/WALs/jenkins-hbase20.apache.org,39533,1685530556178/jenkins-hbase20.apache.org%2C39533%2C1685530556178.1685530617790 2023-05-31 10:56:57,801 DEBUG [Listener at localhost.localdomain/34053] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45449,DS-ff7982ed-e36e-413a-947d-c86db0873a3d,DISK], DatanodeInfoWithStorage[127.0.0.1:37191,DS-526f11f0-1168-4dfa-96bb-a967f5afcec2,DISK]] 2023-05-31 10:56:57,801 DEBUG [Listener at localhost.localdomain/34053] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/WALs/jenkins-hbase20.apache.org,39533,1685530556178/jenkins-hbase20.apache.org%2C39533%2C1685530556178.1685530607709 is not closed yet, will try archiving it next time 2023-05-31 10:56:57,801 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/WALs/jenkins-hbase20.apache.org,39533,1685530556178/jenkins-hbase20.apache.org%2C39533%2C1685530556178.1685530597572 to hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/oldWALs/jenkins-hbase20.apache.org%2C39533%2C1685530556178.1685530597572 2023-05-31 10:56:57,801 INFO [Listener at localhost.localdomain/34053] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-31 10:56:57,801 INFO [Listener at 
localhost.localdomain/34053] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-31 10:56:57,802 DEBUG [Listener at localhost.localdomain/34053] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x72a2ab3b to 127.0.0.1:49620 2023-05-31 10:56:57,803 DEBUG [Listener at localhost.localdomain/34053] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 10:56:57,803 DEBUG [Listener at localhost.localdomain/34053] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-31 10:56:57,803 DEBUG [Listener at localhost.localdomain/34053] util.JVMClusterUtil(257): Found active master hash=1863338451, stopped=false 2023-05-31 10:56:57,803 INFO [Listener at localhost.localdomain/34053] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,35127,1685530556142 2023-05-31 10:56:57,805 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 10:56:57,805 INFO [Listener at localhost.localdomain/34053] procedure2.ProcedureExecutor(629): Stopping 2023-05-31 10:56:57,805 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 10:56:57,806 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:56:57,806 DEBUG [Listener at localhost.localdomain/34053] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x108876a2 to 127.0.0.1:49620 2023-05-31 10:56:57,806 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKUtil(164): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 10:56:57,806 DEBUG [Listener at localhost.localdomain/34053] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 10:56:57,806 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 10:56:57,806 INFO [Listener at localhost.localdomain/34053] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,39533,1685530556178' ***** 2023-05-31 10:56:57,806 INFO [Listener at localhost.localdomain/34053] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-31 10:56:57,807 INFO [RS:0;jenkins-hbase20:39533] regionserver.HeapMemoryManager(220): Stopping 2023-05-31 10:56:57,807 INFO [RS:0;jenkins-hbase20:39533] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-31 10:56:57,807 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-31 10:56:57,807 INFO [RS:0;jenkins-hbase20:39533] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-05-31 10:56:57,807 INFO [RS:0;jenkins-hbase20:39533] regionserver.HRegionServer(3303): Received CLOSE for 6274e512daae1bed1f549f8d51e72d0b 2023-05-31 10:56:57,808 INFO [RS:0;jenkins-hbase20:39533] regionserver.HRegionServer(3303): Received CLOSE for 46845520ed600006d81badfba6c65c5d 2023-05-31 10:56:57,808 INFO [RS:0;jenkins-hbase20:39533] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:57,808 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 6274e512daae1bed1f549f8d51e72d0b, disabling compactions & flushes 2023-05-31 10:56:57,808 DEBUG [RS:0;jenkins-hbase20:39533] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x46dd1ae1 to 127.0.0.1:49620 2023-05-31 10:56:57,808 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685530556772.6274e512daae1bed1f549f8d51e72d0b. 2023-05-31 10:56:57,808 DEBUG [RS:0;jenkins-hbase20:39533] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 10:56:57,808 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685530556772.6274e512daae1bed1f549f8d51e72d0b. 2023-05-31 10:56:57,808 INFO [RS:0;jenkins-hbase20:39533] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-31 10:56:57,808 INFO [RS:0;jenkins-hbase20:39533] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-31 10:56:57,808 INFO [RS:0;jenkins-hbase20:39533] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-31 10:56:57,808 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685530556772.6274e512daae1bed1f549f8d51e72d0b. 
after waiting 0 ms 2023-05-31 10:56:57,808 INFO [RS:0;jenkins-hbase20:39533] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-31 10:56:57,808 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685530556772.6274e512daae1bed1f549f8d51e72d0b. 2023-05-31 10:56:57,808 INFO [RS:0;jenkins-hbase20:39533] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-31 10:56:57,808 DEBUG [RS:0;jenkins-hbase20:39533] regionserver.HRegionServer(1478): Online Regions={6274e512daae1bed1f549f8d51e72d0b=hbase:namespace,,1685530556772.6274e512daae1bed1f549f8d51e72d0b., 46845520ed600006d81badfba6c65c5d=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d., 1588230740=hbase:meta,,1.1588230740} 2023-05-31 10:56:57,809 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-31 10:56:57,809 DEBUG [RS:0;jenkins-hbase20:39533] regionserver.HRegionServer(1504): Waiting on 1588230740, 46845520ed600006d81badfba6c65c5d, 6274e512daae1bed1f549f8d51e72d0b 2023-05-31 10:56:57,809 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-31 10:56:57,809 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-31 10:56:57,809 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-31 10:56:57,809 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-31 10:56:57,809 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, 
dataSize=3.10 KB heapSize=5.61 KB 2023-05-31 10:56:57,815 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/namespace/6274e512daae1bed1f549f8d51e72d0b/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-31 10:56:57,817 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685530556772.6274e512daae1bed1f549f8d51e72d0b. 2023-05-31 10:56:57,817 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 6274e512daae1bed1f549f8d51e72d0b: 2023-05-31 10:56:57,817 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685530556772.6274e512daae1bed1f549f8d51e72d0b. 2023-05-31 10:56:57,817 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 46845520ed600006d81badfba6c65c5d, disabling compactions & flushes 2023-05-31 10:56:57,817 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d. 2023-05-31 10:56:57,817 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d. 2023-05-31 10:56:57,817 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d. 
after waiting 0 ms 2023-05-31 10:56:57,817 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d. 2023-05-31 10:56:57,817 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 46845520ed600006d81badfba6c65c5d 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-31 10:56:57,822 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.85 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/meta/1588230740/.tmp/info/7c055f29f777470cab0c45556d8f2f93 2023-05-31 10:56:57,830 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=22 (bloomFilter=true), to=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/.tmp/info/49e74f410f01475d84172b9a61ba22e3 2023-05-31 10:56:57,837 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/.tmp/info/49e74f410f01475d84172b9a61ba22e3 as hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/info/49e74f410f01475d84172b9a61ba22e3 2023-05-31 10:56:57,842 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data 
size=264 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/meta/1588230740/.tmp/table/86cbc959f83f4af2b2b61746459c7245 2023-05-31 10:56:57,845 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/info/49e74f410f01475d84172b9a61ba22e3, entries=1, sequenceid=22, filesize=5.8 K 2023-05-31 10:56:57,846 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 46845520ed600006d81badfba6c65c5d in 29ms, sequenceid=22, compaction requested=true 2023-05-31 10:56:57,850 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/info/4d8facc983ff406faf148e02b82d41c6, hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/info/8353b7e21dc34e87838617e4115f27ff, hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/info/47a0536a0b534fae9e0fa291e4aac55e] to archive 2023-05-31 10:56:57,851 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-05-31 10:56:57,852 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/meta/1588230740/.tmp/info/7c055f29f777470cab0c45556d8f2f93 as hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/meta/1588230740/info/7c055f29f777470cab0c45556d8f2f93 2023-05-31 10:56:57,853 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/info/4d8facc983ff406faf148e02b82d41c6 to hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/info/4d8facc983ff406faf148e02b82d41c6 2023-05-31 10:56:57,854 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/info/8353b7e21dc34e87838617e4115f27ff to hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/info/8353b7e21dc34e87838617e4115f27ff 2023-05-31 10:56:57,855 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d.-1] 
backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/info/47a0536a0b534fae9e0fa291e4aac55e to hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/info/47a0536a0b534fae9e0fa291e4aac55e 2023-05-31 10:56:57,862 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/meta/1588230740/info/7c055f29f777470cab0c45556d8f2f93, entries=20, sequenceid=14, filesize=7.6 K 2023-05-31 10:56:57,862 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/46845520ed600006d81badfba6c65c5d/recovered.edits/25.seqid, newMaxSeqId=25, maxSeqId=1 2023-05-31 10:56:57,863 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d. 
2023-05-31 10:56:57,863 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 46845520ed600006d81badfba6c65c5d: 2023-05-31 10:56:57,863 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/meta/1588230740/.tmp/table/86cbc959f83f4af2b2b61746459c7245 as hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/meta/1588230740/table/86cbc959f83f4af2b2b61746459c7245 2023-05-31 10:56:57,863 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685530557321.46845520ed600006d81badfba6c65c5d. 2023-05-31 10:56:57,868 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/meta/1588230740/table/86cbc959f83f4af2b2b61746459c7245, entries=4, sequenceid=14, filesize=4.9 K 2023-05-31 10:56:57,869 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.10 KB/3178, heapSize ~5.33 KB/5456, currentSize=0 B/0 for 1588230740 in 60ms, sequenceid=14, compaction requested=false 2023-05-31 10:56:57,876 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-05-31 10:56:57,876 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-31 10:56:57,876 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] 
regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-31 10:56:57,876 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-31 10:56:57,876 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-31 10:56:58,009 INFO [RS:0;jenkins-hbase20:39533] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,39533,1685530556178; all regions closed. 2023-05-31 10:56:58,010 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/WALs/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:58,021 DEBUG [RS:0;jenkins-hbase20:39533] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/oldWALs 2023-05-31 10:56:58,021 INFO [RS:0;jenkins-hbase20:39533] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C39533%2C1685530556178.meta:.meta(num 1685530556711) 2023-05-31 10:56:58,022 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/WALs/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:58,029 DEBUG [RS:0;jenkins-hbase20:39533] wal.AbstractFSWAL(1028): Moved 2 WAL file(s) to /user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/oldWALs 2023-05-31 10:56:58,029 INFO [RS:0;jenkins-hbase20:39533] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C39533%2C1685530556178:(num 1685530617790) 2023-05-31 10:56:58,029 DEBUG [RS:0;jenkins-hbase20:39533] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 10:56:58,029 INFO [RS:0;jenkins-hbase20:39533] regionserver.LeaseManager(133): Closed leases 2023-05-31 10:56:58,029 INFO [RS:0;jenkins-hbase20:39533] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore 
name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-31 10:56:58,029 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-31 10:56:58,030 INFO [RS:0;jenkins-hbase20:39533] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:39533 2023-05-31 10:56:58,033 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 10:56:58,033 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,39533,1685530556178 2023-05-31 10:56:58,033 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 10:56:58,034 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,39533,1685530556178] 2023-05-31 10:56:58,034 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,39533,1685530556178; numProcessing=1 2023-05-31 10:56:58,035 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,39533,1685530556178 already deleted, retry=false 2023-05-31 10:56:58,035 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; 
jenkins-hbase20.apache.org,39533,1685530556178 expired; onlineServers=0 2023-05-31 10:56:58,035 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,35127,1685530556142' ***** 2023-05-31 10:56:58,035 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-31 10:56:58,035 DEBUG [M:0;jenkins-hbase20:35127] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6b2fd0e9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-31 10:56:58,035 INFO [M:0;jenkins-hbase20:35127] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,35127,1685530556142 2023-05-31 10:56:58,035 INFO [M:0;jenkins-hbase20:35127] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,35127,1685530556142; all regions closed. 
2023-05-31 10:56:58,035 DEBUG [M:0;jenkins-hbase20:35127] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 10:56:58,036 DEBUG [M:0;jenkins-hbase20:35127] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-31 10:56:58,036 DEBUG [M:0;jenkins-hbase20:35127] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-31 10:56:58,036 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685530556344] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685530556344,5,FailOnTimeoutGroup] 2023-05-31 10:56:58,036 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685530556345] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685530556345,5,FailOnTimeoutGroup] 2023-05-31 10:56:58,036 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-05-31 10:56:58,036 INFO [M:0;jenkins-hbase20:35127] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-31 10:56:58,037 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-31 10:56:58,037 INFO [M:0;jenkins-hbase20:35127] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-05-31 10:56:58,037 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:56:58,037 INFO [M:0;jenkins-hbase20:35127] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-05-31 10:56:58,037 DEBUG [M:0;jenkins-hbase20:35127] master.HMaster(1512): Stopping service threads 2023-05-31 10:56:58,037 INFO [M:0;jenkins-hbase20:35127] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-31 10:56:58,037 ERROR [M:0;jenkins-hbase20:35127] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-31 10:56:58,038 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 10:56:58,038 INFO [M:0;jenkins-hbase20:35127] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-31 10:56:58,038 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-31 10:56:58,038 DEBUG [M:0;jenkins-hbase20:35127] zookeeper.ZKUtil(398): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-31 10:56:58,038 WARN [M:0;jenkins-hbase20:35127] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-31 10:56:58,038 INFO [M:0;jenkins-hbase20:35127] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-31 10:56:58,039 INFO [M:0;jenkins-hbase20:35127] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-31 10:56:58,039 DEBUG [M:0;jenkins-hbase20:35127] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-31 10:56:58,039 INFO [M:0;jenkins-hbase20:35127] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 10:56:58,039 DEBUG [M:0;jenkins-hbase20:35127] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 10:56:58,039 DEBUG [M:0;jenkins-hbase20:35127] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-31 10:56:58,039 DEBUG [M:0;jenkins-hbase20:35127] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-31 10:56:58,039 INFO [M:0;jenkins-hbase20:35127] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.93 KB heapSize=47.38 KB 2023-05-31 10:56:58,052 INFO [M:0;jenkins-hbase20:35127] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.93 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/ba79aa77e2564f5dabeaf5adbffa09a4 2023-05-31 10:56:58,057 INFO [M:0;jenkins-hbase20:35127] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ba79aa77e2564f5dabeaf5adbffa09a4 2023-05-31 10:56:58,058 DEBUG [M:0;jenkins-hbase20:35127] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/ba79aa77e2564f5dabeaf5adbffa09a4 as hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/ba79aa77e2564f5dabeaf5adbffa09a4 2023-05-31 10:56:58,063 INFO [M:0;jenkins-hbase20:35127] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ba79aa77e2564f5dabeaf5adbffa09a4 2023-05-31 10:56:58,064 INFO [M:0;jenkins-hbase20:35127] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40915/user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/ba79aa77e2564f5dabeaf5adbffa09a4, entries=11, sequenceid=100, filesize=6.1 K 2023-05-31 10:56:58,065 INFO [M:0;jenkins-hbase20:35127] regionserver.HRegion(2948): Finished flush of dataSize ~38.93 KB/39866, heapSize ~47.36 KB/48496, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 25ms, sequenceid=100, 
compaction requested=false 2023-05-31 10:56:58,066 INFO [M:0;jenkins-hbase20:35127] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 10:56:58,066 DEBUG [M:0;jenkins-hbase20:35127] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 10:56:58,066 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/52b03192-3f96-d191-aa30-4e9f335db0be/MasterData/WALs/jenkins-hbase20.apache.org,35127,1685530556142 2023-05-31 10:56:58,069 INFO [M:0;jenkins-hbase20:35127] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-31 10:56:58,069 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-31 10:56:58,069 INFO [M:0;jenkins-hbase20:35127] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:35127 2023-05-31 10:56:58,071 DEBUG [M:0;jenkins-hbase20:35127] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,35127,1685530556142 already deleted, retry=false 2023-05-31 10:56:58,134 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 10:56:58,134 INFO [RS:0;jenkins-hbase20:39533] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,39533,1685530556178; zookeeper connection closed. 
2023-05-31 10:56:58,134 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): regionserver:39533-0x101a1294f060001, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 10:56:58,135 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@680db44a] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@680db44a 2023-05-31 10:56:58,135 INFO [Listener at localhost.localdomain/34053] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-31 10:56:58,234 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 10:56:58,234 INFO [M:0;jenkins-hbase20:35127] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,35127,1685530556142; zookeeper connection closed. 
2023-05-31 10:56:58,235 DEBUG [Listener at localhost.localdomain/34053-EventThread] zookeeper.ZKWatcher(600): master:35127-0x101a1294f060000, quorum=127.0.0.1:49620, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 10:56:58,236 WARN [Listener at localhost.localdomain/34053] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 10:56:58,246 INFO [Listener at localhost.localdomain/34053] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 10:56:58,352 WARN [BP-45014488-148.251.75.209-1685530555677 heartbeating to localhost.localdomain/127.0.0.1:40915] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-31 10:56:58,352 WARN [BP-45014488-148.251.75.209-1685530555677 heartbeating to localhost.localdomain/127.0.0.1:40915] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-45014488-148.251.75.209-1685530555677 (Datanode Uuid b2fdce09-9317-4e29-b908-514e55b45abd) service to localhost.localdomain/127.0.0.1:40915 2023-05-31 10:56:58,354 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19/cluster_a7861085-8786-7f85-dcec-f79dc7948b7e/dfs/data/data3/current/BP-45014488-148.251.75.209-1685530555677] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 10:56:58,355 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19/cluster_a7861085-8786-7f85-dcec-f79dc7948b7e/dfs/data/data4/current/BP-45014488-148.251.75.209-1685530555677] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 10:56:58,356 WARN [Listener at localhost.localdomain/34053] 
datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 10:56:58,360 INFO [Listener at localhost.localdomain/34053] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 10:56:58,441 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-31 10:56:58,468 WARN [BP-45014488-148.251.75.209-1685530555677 heartbeating to localhost.localdomain/127.0.0.1:40915] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-31 10:56:58,468 WARN [BP-45014488-148.251.75.209-1685530555677 heartbeating to localhost.localdomain/127.0.0.1:40915] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-45014488-148.251.75.209-1685530555677 (Datanode Uuid 128bd9b3-52b8-47f1-8c6d-51aa0160b2ae) service to localhost.localdomain/127.0.0.1:40915 2023-05-31 10:56:58,469 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19/cluster_a7861085-8786-7f85-dcec-f79dc7948b7e/dfs/data/data1/current/BP-45014488-148.251.75.209-1685530555677] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 10:56:58,470 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19/cluster_a7861085-8786-7f85-dcec-f79dc7948b7e/dfs/data/data2/current/BP-45014488-148.251.75.209-1685530555677] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 10:56:58,482 INFO [Listener at localhost.localdomain/34053] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-05-31 10:56:58,600 INFO [Listener at localhost.localdomain/34053] zookeeper.MiniZooKeeperCluster(344): 
Shutdown MiniZK cluster with all ZK servers 2023-05-31 10:56:58,616 INFO [Listener at localhost.localdomain/34053] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-31 10:56:58,625 INFO [Listener at localhost.localdomain/34053] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=94 (was 88) - Thread LEAK? -, OpenFileDescriptor=498 (was 461) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=88 (was 139), ProcessCount=166 (was 166), AvailableMemoryMB=8207 (was 8576) 2023-05-31 10:56:58,633 INFO [Listener at localhost.localdomain/34053] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRolling Thread=95, OpenFileDescriptor=498, MaxFileDescriptor=60000, SystemLoadAverage=88, ProcessCount=166, AvailableMemoryMB=8207 2023-05-31 10:56:58,633 INFO [Listener at localhost.localdomain/34053] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-31 10:56:58,633 INFO [Listener at localhost.localdomain/34053] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19/hadoop.log.dir so I do NOT create it in target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca 2023-05-31 10:56:58,633 INFO [Listener at localhost.localdomain/34053] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/5a8b6b66-f527-b6c4-b261-834b6569ab19/hadoop.tmp.dir so I do NOT create it in target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca 2023-05-31 10:56:58,633 INFO [Listener at 
localhost.localdomain/34053] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca/cluster_ac2f1810-9200-1f40-45b1-bb67ac505ce6, deleteOnExit=true 2023-05-31 10:56:58,633 INFO [Listener at localhost.localdomain/34053] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-31 10:56:58,633 INFO [Listener at localhost.localdomain/34053] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca/test.cache.data in system properties and HBase conf 2023-05-31 10:56:58,634 INFO [Listener at localhost.localdomain/34053] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca/hadoop.tmp.dir in system properties and HBase conf 2023-05-31 10:56:58,634 INFO [Listener at localhost.localdomain/34053] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca/hadoop.log.dir in system properties and HBase conf 2023-05-31 10:56:58,634 INFO [Listener at localhost.localdomain/34053] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-31 10:56:58,634 INFO [Listener at localhost.localdomain/34053] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca/mapreduce.cluster.temp.dir 
in system properties and HBase conf 2023-05-31 10:56:58,634 INFO [Listener at localhost.localdomain/34053] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-31 10:56:58,634 DEBUG [Listener at localhost.localdomain/34053] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-31 10:56:58,634 INFO [Listener at localhost.localdomain/34053] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-31 10:56:58,634 INFO [Listener at localhost.localdomain/34053] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-31 10:56:58,635 INFO [Listener at localhost.localdomain/34053] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-31 10:56:58,635 INFO [Listener at localhost.localdomain/34053] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-31 10:56:58,635 INFO [Listener at localhost.localdomain/34053] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-31 10:56:58,635 INFO [Listener at localhost.localdomain/34053] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-31 10:56:58,635 INFO [Listener at localhost.localdomain/34053] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-31 10:56:58,635 INFO [Listener at localhost.localdomain/34053] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-31 10:56:58,635 INFO [Listener at localhost.localdomain/34053] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-31 10:56:58,635 INFO [Listener at localhost.localdomain/34053] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca/nfs.dump.dir in system properties and HBase conf 2023-05-31 
10:56:58,635 INFO [Listener at localhost.localdomain/34053] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca/java.io.tmpdir in system properties and HBase conf 2023-05-31 10:56:58,635 INFO [Listener at localhost.localdomain/34053] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-31 10:56:58,635 INFO [Listener at localhost.localdomain/34053] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-31 10:56:58,636 INFO [Listener at localhost.localdomain/34053] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-31 10:56:58,637 WARN [Listener at localhost.localdomain/34053] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-31 10:56:58,638 WARN [Listener at localhost.localdomain/34053] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-31 10:56:58,638 WARN [Listener at localhost.localdomain/34053] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-31 10:56:58,661 WARN [Listener at localhost.localdomain/34053] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 10:56:58,663 INFO [Listener at localhost.localdomain/34053] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 10:56:58,667 INFO [Listener at localhost.localdomain/34053] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca/java.io.tmpdir/Jetty_localhost_localdomain_35891_hdfs____.m42dmg/webapp 2023-05-31 10:56:58,739 INFO [Listener at localhost.localdomain/34053] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:35891 2023-05-31 10:56:58,783 WARN [Listener at localhost.localdomain/34053] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-31 10:56:58,784 WARN [Listener at localhost.localdomain/34053] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-31 10:56:58,784 WARN [Listener at localhost.localdomain/34053] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-31 10:56:58,806 WARN [Listener at localhost.localdomain/45345] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 10:56:58,814 WARN [Listener at localhost.localdomain/45345] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-31 10:56:58,816 WARN [Listener at localhost.localdomain/45345] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 10:56:58,817 INFO [Listener at localhost.localdomain/45345] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 10:56:58,823 INFO [Listener at localhost.localdomain/45345] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca/java.io.tmpdir/Jetty_localhost_42321_datanode____.lk0bz9/webapp 2023-05-31 10:56:58,897 INFO [Listener at localhost.localdomain/45345] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42321 2023-05-31 10:56:58,901 WARN [Listener at localhost.localdomain/35325] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 10:56:58,912 WARN [Listener at localhost.localdomain/35325] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-31 10:56:58,915 WARN [Listener at localhost.localdomain/35325] http.HttpRequestLog(97): Jetty request 
log can only be enabled using Log4j 2023-05-31 10:56:58,916 INFO [Listener at localhost.localdomain/35325] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 10:56:58,919 INFO [Listener at localhost.localdomain/35325] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca/java.io.tmpdir/Jetty_localhost_33847_datanode____qfzqo2/webapp 2023-05-31 10:56:58,973 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3a4f3c300f2e6756: Processing first storage report for DS-f7fb2d32-c24c-4f53-b580-9212bddbd2ff from datanode e82f9b82-2071-4f5f-a83d-2024fe48b0d3 2023-05-31 10:56:58,973 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3a4f3c300f2e6756: from storage DS-f7fb2d32-c24c-4f53-b580-9212bddbd2ff node DatanodeRegistration(127.0.0.1:34255, datanodeUuid=e82f9b82-2071-4f5f-a83d-2024fe48b0d3, infoPort=34949, infoSecurePort=0, ipcPort=35325, storageInfo=lv=-57;cid=testClusterID;nsid=175175494;c=1685530618639), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 10:56:58,973 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3a4f3c300f2e6756: Processing first storage report for DS-3eaa531a-7515-418b-b52d-df6f02a27667 from datanode e82f9b82-2071-4f5f-a83d-2024fe48b0d3 2023-05-31 10:56:58,973 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3a4f3c300f2e6756: from storage DS-3eaa531a-7515-418b-b52d-df6f02a27667 node DatanodeRegistration(127.0.0.1:34255, datanodeUuid=e82f9b82-2071-4f5f-a83d-2024fe48b0d3, infoPort=34949, infoSecurePort=0, ipcPort=35325, 
storageInfo=lv=-57;cid=testClusterID;nsid=175175494;c=1685530618639), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 10:56:58,999 INFO [Listener at localhost.localdomain/35325] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33847 2023-05-31 10:56:59,006 WARN [Listener at localhost.localdomain/34183] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 10:56:59,066 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf7804b00edca50fa: Processing first storage report for DS-f3e6c67a-1d34-4350-87f8-531f2e4446c0 from datanode 3db69346-4937-4c0e-afa8-057c2023411c 2023-05-31 10:56:59,066 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf7804b00edca50fa: from storage DS-f3e6c67a-1d34-4350-87f8-531f2e4446c0 node DatanodeRegistration(127.0.0.1:33157, datanodeUuid=3db69346-4937-4c0e-afa8-057c2023411c, infoPort=33489, infoSecurePort=0, ipcPort=34183, storageInfo=lv=-57;cid=testClusterID;nsid=175175494;c=1685530618639), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 10:56:59,066 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf7804b00edca50fa: Processing first storage report for DS-46ea2359-da50-468e-afdb-6b091d0caae0 from datanode 3db69346-4937-4c0e-afa8-057c2023411c 2023-05-31 10:56:59,066 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf7804b00edca50fa: from storage DS-46ea2359-da50-468e-afdb-6b091d0caae0 node DatanodeRegistration(127.0.0.1:33157, datanodeUuid=3db69346-4937-4c0e-afa8-057c2023411c, infoPort=33489, infoSecurePort=0, ipcPort=34183, storageInfo=lv=-57;cid=testClusterID;nsid=175175494;c=1685530618639), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 10:56:59,117 DEBUG [Listener at 
localhost.localdomain/34183] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca 2023-05-31 10:56:59,121 INFO [Listener at localhost.localdomain/34183] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca/cluster_ac2f1810-9200-1f40-45b1-bb67ac505ce6/zookeeper_0, clientPort=57094, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca/cluster_ac2f1810-9200-1f40-45b1-bb67ac505ce6/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca/cluster_ac2f1810-9200-1f40-45b1-bb67ac505ce6/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-31 10:56:59,123 INFO [Listener at localhost.localdomain/34183] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=57094 2023-05-31 10:56:59,123 INFO [Listener at localhost.localdomain/34183] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:56:59,124 INFO [Listener at localhost.localdomain/34183] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:56:59,140 INFO [Listener at localhost.localdomain/34183] util.FSUtils(471): Created version file at 
hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9 with version=8 2023-05-31 10:56:59,140 INFO [Listener at localhost.localdomain/34183] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/hbase-staging 2023-05-31 10:56:59,142 INFO [Listener at localhost.localdomain/34183] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-05-31 10:56:59,142 INFO [Listener at localhost.localdomain/34183] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 10:56:59,143 INFO [Listener at localhost.localdomain/34183] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-31 10:56:59,143 INFO [Listener at localhost.localdomain/34183] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-31 10:56:59,143 INFO [Listener at localhost.localdomain/34183] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 10:56:59,143 INFO [Listener at localhost.localdomain/34183] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-31 10:56:59,143 INFO [Listener at localhost.localdomain/34183] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, 
hbase.pb.AdminService 2023-05-31 10:56:59,145 INFO [Listener at localhost.localdomain/34183] ipc.NettyRpcServer(120): Bind to /148.251.75.209:37771 2023-05-31 10:56:59,145 INFO [Listener at localhost.localdomain/34183] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:56:59,146 INFO [Listener at localhost.localdomain/34183] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:56:59,147 INFO [Listener at localhost.localdomain/34183] zookeeper.RecoverableZooKeeper(93): Process identifier=master:37771 connecting to ZooKeeper ensemble=127.0.0.1:57094 2023-05-31 10:56:59,152 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): master:377710x0, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-31 10:56:59,153 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:37771-0x101a12a451b0000 connected 2023-05-31 10:56:59,170 DEBUG [Listener at localhost.localdomain/34183] zookeeper.ZKUtil(164): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 10:56:59,171 DEBUG [Listener at localhost.localdomain/34183] zookeeper.ZKUtil(164): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 10:56:59,172 DEBUG [Listener at localhost.localdomain/34183] zookeeper.ZKUtil(164): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-31 10:56:59,172 DEBUG [Listener at localhost.localdomain/34183] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37771 2023-05-31 10:56:59,172 DEBUG [Listener at localhost.localdomain/34183] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37771 2023-05-31 10:56:59,172 DEBUG [Listener at localhost.localdomain/34183] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37771 2023-05-31 10:56:59,172 DEBUG [Listener at localhost.localdomain/34183] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37771 2023-05-31 10:56:59,173 DEBUG [Listener at localhost.localdomain/34183] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37771 2023-05-31 10:56:59,173 INFO [Listener at localhost.localdomain/34183] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9, hbase.cluster.distributed=false 2023-05-31 10:56:59,184 INFO [Listener at localhost.localdomain/34183] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-05-31 10:56:59,184 INFO [Listener at localhost.localdomain/34183] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 10:56:59,184 INFO [Listener at localhost.localdomain/34183] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-31 10:56:59,184 INFO [Listener at localhost.localdomain/34183] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-31 10:56:59,184 INFO [Listener at localhost.localdomain/34183] ipc.RpcExecutor(189): Instantiated 
replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 10:56:59,184 INFO [Listener at localhost.localdomain/34183] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-31 10:56:59,185 INFO [Listener at localhost.localdomain/34183] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-31 10:56:59,186 INFO [Listener at localhost.localdomain/34183] ipc.NettyRpcServer(120): Bind to /148.251.75.209:36333 2023-05-31 10:56:59,186 INFO [Listener at localhost.localdomain/34183] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-31 10:56:59,187 DEBUG [Listener at localhost.localdomain/34183] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-31 10:56:59,188 INFO [Listener at localhost.localdomain/34183] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:56:59,189 INFO [Listener at localhost.localdomain/34183] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:56:59,190 INFO [Listener at localhost.localdomain/34183] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36333 connecting to ZooKeeper ensemble=127.0.0.1:57094 2023-05-31 10:56:59,192 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): regionserver:363330x0, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-31 10:56:59,194 DEBUG 
[Listener at localhost.localdomain/34183] zookeeper.ZKUtil(164): regionserver:363330x0, quorum=127.0.0.1:57094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 10:56:59,194 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36333-0x101a12a451b0001 connected 2023-05-31 10:56:59,194 DEBUG [Listener at localhost.localdomain/34183] zookeeper.ZKUtil(164): regionserver:36333-0x101a12a451b0001, quorum=127.0.0.1:57094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 10:56:59,195 DEBUG [Listener at localhost.localdomain/34183] zookeeper.ZKUtil(164): regionserver:36333-0x101a12a451b0001, quorum=127.0.0.1:57094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-31 10:56:59,196 DEBUG [Listener at localhost.localdomain/34183] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36333 2023-05-31 10:56:59,196 DEBUG [Listener at localhost.localdomain/34183] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36333 2023-05-31 10:56:59,196 DEBUG [Listener at localhost.localdomain/34183] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36333 2023-05-31 10:56:59,196 DEBUG [Listener at localhost.localdomain/34183] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36333 2023-05-31 10:56:59,197 DEBUG [Listener at localhost.localdomain/34183] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36333 2023-05-31 10:56:59,198 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,37771,1685530619142 2023-05-31 10:56:59,199 DEBUG [Listener at localhost.localdomain/34183-EventThread] 
zookeeper.ZKWatcher(600): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-31 10:56:59,199 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,37771,1685530619142 2023-05-31 10:56:59,200 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): regionserver:36333-0x101a12a451b0001, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-31 10:56:59,200 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-31 10:56:59,201 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:56:59,201 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-31 10:56:59,202 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,37771,1685530619142 from backup master directory 2023-05-31 10:56:59,202 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-31 10:56:59,203 DEBUG [Listener at 
localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,37771,1685530619142 2023-05-31 10:56:59,203 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-31 10:56:59,203 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-31 10:56:59,203 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,37771,1685530619142 2023-05-31 10:56:59,218 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/hbase.id with ID: 6677a8b6-8f28-4cf2-b82b-e5a023139178 2023-05-31 10:56:59,229 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:56:59,231 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:56:59,239 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x5e3c90bd to 127.0.0.1:57094 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 10:56:59,245 
DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@763a4d49, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 10:56:59,245 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-31 10:56:59,246 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-31 10:56:59,247 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 10:56:59,248 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/MasterData/data/master/store-tmp 2023-05-31 10:56:59,254 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 10:56:59,254 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-31 10:56:59,254 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 10:56:59,254 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 10:56:59,254 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-31 10:56:59,254 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 10:56:59,254 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-31 10:56:59,254 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 10:56:59,255 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/MasterData/WALs/jenkins-hbase20.apache.org,37771,1685530619142 2023-05-31 10:56:59,257 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C37771%2C1685530619142, suffix=, logDir=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/MasterData/WALs/jenkins-hbase20.apache.org,37771,1685530619142, archiveDir=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/MasterData/oldWALs, maxLogs=10 2023-05-31 10:56:59,262 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/MasterData/WALs/jenkins-hbase20.apache.org,37771,1685530619142/jenkins-hbase20.apache.org%2C37771%2C1685530619142.1685530619257 2023-05-31 10:56:59,262 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33157,DS-f3e6c67a-1d34-4350-87f8-531f2e4446c0,DISK], DatanodeInfoWithStorage[127.0.0.1:34255,DS-f7fb2d32-c24c-4f53-b580-9212bddbd2ff,DISK]] 2023-05-31 10:56:59,262 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-31 10:56:59,262 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 10:56:59,262 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:56:59,262 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:56:59,263 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:56:59,265 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-31 10:56:59,265 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-31 10:56:59,265 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] 
regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 10:56:59,266 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:56:59,266 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:56:59,268 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:56:59,271 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 10:56:59,272 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=724070, jitterRate=-0.07929849624633789}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 10:56:59,272 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 10:56:59,272 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, 
flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-31 10:56:59,273 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-31 10:56:59,273 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-31 10:56:59,273 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-31 10:56:59,274 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-31 10:56:59,274 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-31 10:56:59,274 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-31 10:56:59,274 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-31 10:56:59,275 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-05-31 10:56:59,286 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false
2023-05-31 10:56:59,286 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc.
2023-05-31 10:56:59,287 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer
2023-05-31 10:56:59,287 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited
2023-05-31 10:56:59,287 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer
2023-05-31 10:56:59,289 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 10:56:59,289 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split
2023-05-31 10:56:59,289 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge
2023-05-31 10:56:59,290 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup
2023-05-31 10:56:59,291 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running
2023-05-31 10:56:59,291 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): regionserver:36333-0x101a12a451b0001, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running
2023-05-31 10:56:59,291 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 10:56:59,291 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,37771,1685530619142, sessionid=0x101a12a451b0000, setting cluster-up flag (Was=false)
2023-05-31 10:56:59,294 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 10:56:59,297 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort
2023-05-31 10:56:59,298 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,37771,1685530619142
2023-05-31 10:56:59,300 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 10:56:59,303 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort
2023-05-31 10:56:59,304 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,37771,1685530619142
2023-05-31 10:56:59,305 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/.hbase-snapshot/.tmp
2023-05-31 10:56:59,307 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta
2023-05-31 10:56:59,308 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5
2023-05-31 10:56:59,308 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5
2023-05-31 10:56:59,308 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5
2023-05-31 10:56:59,308 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5
2023-05-31 10:56:59,308 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10
2023-05-31 10:56:59,308 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:56:59,308 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2
2023-05-31 10:56:59,308 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:56:59,309 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685530649309
2023-05-31 10:56:59,310 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1
2023-05-31 10:56:59,310 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner
2023-05-31 10:56:59,310 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner
2023-05-31 10:56:59,310 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner
2023-05-31 10:56:59,310 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner
2023-05-31 10:56:59,310 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads
2023-05-31 10:56:59,311 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta
2023-05-31 10:56:59,311 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region
2023-05-31 10:56:59,312 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}
2023-05-31 10:56:59,316 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled.
2023-05-31 10:56:59,317 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2
2023-05-31 10:56:59,317 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner
2023-05-31 10:56:59,317 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner
2023-05-31 10:56:59,317 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner
2023-05-31 10:56:59,318 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner
2023-05-31 10:56:59,318 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685530619318,5,FailOnTimeoutGroup]
2023-05-31 10:56:59,318 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685530619318,5,FailOnTimeoutGroup]
2023-05-31 10:56:59,318 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled.
2023-05-31 10:56:59,318 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it.
2023-05-31 10:56:59,318 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled.
2023-05-31 10:56:59,318 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled.
2023-05-31 10:56:59,324 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/meta/.tabledesc/.tableinfo.0000000001
2023-05-31 10:56:59,324 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/meta/.tabledesc/.tableinfo.0000000001
2023-05-31 10:56:59,324 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9
2023-05-31 10:56:59,333 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 10:56:59,335 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740
2023-05-31 10:56:59,336 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/meta/1588230740/info
2023-05-31 10:56:59,336 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info
2023-05-31 10:56:59,337 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 10:56:59,337 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740
2023-05-31 10:56:59,338 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/meta/1588230740/rep_barrier
2023-05-31 10:56:59,339 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier
2023-05-31 10:56:59,339 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 10:56:59,339 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740
2023-05-31 10:56:59,341 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/meta/1588230740/table
2023-05-31 10:56:59,341 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table
2023-05-31 10:56:59,342 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 10:56:59,342 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/meta/1588230740
2023-05-31 10:56:59,343 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/meta/1588230740
2023-05-31 10:56:59,345 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead.
2023-05-31 10:56:59,346 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740
2023-05-31 10:56:59,348 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-05-31 10:56:59,348 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=794918, jitterRate=0.010791242122650146}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216}
2023-05-31 10:56:59,348 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740:
2023-05-31 10:56:59,348 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes
2023-05-31 10:56:59,348 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740
2023-05-31 10:56:59,348 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740
2023-05-31 10:56:59,348 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms
2023-05-31 10:56:59,348 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740
2023-05-31 10:56:59,349 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740
2023-05-31 10:56:59,349 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-05-31 10:56:59,349 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta
2023-05-31 10:56:59,349 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta
2023-05-31 10:56:59,350 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}]
2023-05-31 10:56:59,351 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN
2023-05-31 10:56:59,352 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false
2023-05-31 10:56:59,399 INFO [RS:0;jenkins-hbase20:36333] regionserver.HRegionServer(951): ClusterId : 6677a8b6-8f28-4cf2-b82b-e5a023139178
2023-05-31 10:56:59,401 DEBUG [RS:0;jenkins-hbase20:36333] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing
2023-05-31 10:56:59,404 DEBUG [RS:0;jenkins-hbase20:36333] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized
2023-05-31 10:56:59,404 DEBUG [RS:0;jenkins-hbase20:36333] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing
2023-05-31 10:56:59,407 DEBUG [RS:0;jenkins-hbase20:36333] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized
2023-05-31 10:56:59,409 DEBUG [RS:0;jenkins-hbase20:36333] zookeeper.ReadOnlyZKClient(139): Connect 0x639c4780 to 127.0.0.1:57094 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-05-31 10:56:59,420 DEBUG [RS:0;jenkins-hbase20:36333] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6d5bb583, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-05-31 10:56:59,420 DEBUG [RS:0;jenkins-hbase20:36333] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@69642cf0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0
2023-05-31 10:56:59,436 DEBUG [RS:0;jenkins-hbase20:36333] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:36333
2023-05-31 10:56:59,436 INFO [RS:0;jenkins-hbase20:36333] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled
2023-05-31 10:56:59,436 INFO [RS:0;jenkins-hbase20:36333] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled
2023-05-31 10:56:59,436 DEBUG [RS:0;jenkins-hbase20:36333] regionserver.HRegionServer(1022): About to register with Master.
2023-05-31 10:56:59,437 INFO [RS:0;jenkins-hbase20:36333] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,37771,1685530619142 with isa=jenkins-hbase20.apache.org/148.251.75.209:36333, startcode=1685530619184
2023-05-31 10:56:59,437 DEBUG [RS:0;jenkins-hbase20:36333] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false
2023-05-31 10:56:59,440 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:55331, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService
2023-05-31 10:56:59,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,36333,1685530619184
2023-05-31 10:56:59,441 DEBUG [RS:0;jenkins-hbase20:36333] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9
2023-05-31 10:56:59,441 DEBUG [RS:0;jenkins-hbase20:36333] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:45345
2023-05-31 10:56:59,441 DEBUG [RS:0;jenkins-hbase20:36333] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1
2023-05-31 10:56:59,443 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-05-31 10:56:59,443 DEBUG [RS:0;jenkins-hbase20:36333] zookeeper.ZKUtil(162): regionserver:36333-0x101a12a451b0001, quorum=127.0.0.1:57094, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36333,1685530619184
2023-05-31 10:56:59,443 WARN [RS:0;jenkins-hbase20:36333] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-05-31 10:56:59,443 INFO [RS:0;jenkins-hbase20:36333] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-05-31 10:56:59,444 DEBUG [RS:0;jenkins-hbase20:36333] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/WALs/jenkins-hbase20.apache.org,36333,1685530619184
2023-05-31 10:56:59,444 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,36333,1685530619184]
2023-05-31 10:56:59,448 DEBUG [RS:0;jenkins-hbase20:36333] zookeeper.ZKUtil(162): regionserver:36333-0x101a12a451b0001, quorum=127.0.0.1:57094, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36333,1685530619184
2023-05-31 10:56:59,449 DEBUG [RS:0;jenkins-hbase20:36333] regionserver.Replication(139): Replication stats-in-log period=300 seconds
2023-05-31 10:56:59,449 INFO [RS:0;jenkins-hbase20:36333] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds
2023-05-31 10:56:59,451 INFO [RS:0;jenkins-hbase20:36333] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false
2023-05-31 10:56:59,452 INFO [RS:0;jenkins-hbase20:36333] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms
2023-05-31 10:56:59,452 INFO [RS:0;jenkins-hbase20:36333] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled.
2023-05-31 10:56:59,452 INFO [RS:0;jenkins-hbase20:36333] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S
2023-05-31 10:56:59,454 INFO [RS:0;jenkins-hbase20:36333] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled.
2023-05-31 10:56:59,454 DEBUG [RS:0;jenkins-hbase20:36333] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:56:59,454 DEBUG [RS:0;jenkins-hbase20:36333] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:56:59,454 DEBUG [RS:0;jenkins-hbase20:36333] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:56:59,454 DEBUG [RS:0;jenkins-hbase20:36333] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:56:59,454 DEBUG [RS:0;jenkins-hbase20:36333] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:56:59,454 DEBUG [RS:0;jenkins-hbase20:36333] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2
2023-05-31 10:56:59,455 DEBUG [RS:0;jenkins-hbase20:36333] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:56:59,455 DEBUG [RS:0;jenkins-hbase20:36333] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:56:59,455 DEBUG [RS:0;jenkins-hbase20:36333] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:56:59,455 DEBUG [RS:0;jenkins-hbase20:36333] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1
2023-05-31 10:56:59,455 INFO [RS:0;jenkins-hbase20:36333] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled.
2023-05-31 10:56:59,456 INFO [RS:0;jenkins-hbase20:36333] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled.
2023-05-31 10:56:59,456 INFO [RS:0;jenkins-hbase20:36333] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled.
2023-05-31 10:56:59,467 INFO [RS:0;jenkins-hbase20:36333] regionserver.HeapMemoryManager(209): Starting, tuneOn=false
2023-05-31 10:56:59,468 INFO [RS:0;jenkins-hbase20:36333] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36333,1685530619184-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled.
2023-05-31 10:56:59,476 INFO [RS:0;jenkins-hbase20:36333] regionserver.Replication(203): jenkins-hbase20.apache.org,36333,1685530619184 started
2023-05-31 10:56:59,476 INFO [RS:0;jenkins-hbase20:36333] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,36333,1685530619184, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:36333, sessionid=0x101a12a451b0001
2023-05-31 10:56:59,476 DEBUG [RS:0;jenkins-hbase20:36333] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting
2023-05-31 10:56:59,476 DEBUG [RS:0;jenkins-hbase20:36333] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,36333,1685530619184
2023-05-31 10:56:59,476 DEBUG [RS:0;jenkins-hbase20:36333] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,36333,1685530619184'
2023-05-31 10:56:59,476 DEBUG [RS:0;jenkins-hbase20:36333] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-05-31 10:56:59,477 DEBUG [RS:0;jenkins-hbase20:36333] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-05-31 10:56:59,477 DEBUG [RS:0;jenkins-hbase20:36333] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started
2023-05-31 10:56:59,477 DEBUG [RS:0;jenkins-hbase20:36333] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting
2023-05-31 10:56:59,477 DEBUG [RS:0;jenkins-hbase20:36333] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,36333,1685530619184
2023-05-31 10:56:59,477 DEBUG [RS:0;jenkins-hbase20:36333] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,36333,1685530619184'
2023-05-31 10:56:59,477 DEBUG [RS:0;jenkins-hbase20:36333] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort'
2023-05-31 10:56:59,478 DEBUG [RS:0;jenkins-hbase20:36333] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired'
2023-05-31 10:56:59,478 DEBUG [RS:0;jenkins-hbase20:36333] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started
2023-05-31 10:56:59,478 INFO [RS:0;jenkins-hbase20:36333] quotas.RegionServerRpcQuotaManager(63): Quota support disabled
2023-05-31 10:56:59,478 INFO [RS:0;jenkins-hbase20:36333] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager.
2023-05-31 10:56:59,503 DEBUG [jenkins-hbase20:37771] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1
2023-05-31 10:56:59,503 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,36333,1685530619184, state=OPENING
2023-05-31 10:56:59,504 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it
2023-05-31 10:56:59,505 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 10:56:59,506 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,36333,1685530619184}]
2023-05-31 10:56:59,506 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED
2023-05-31 10:56:59,581 INFO [RS:0;jenkins-hbase20:36333] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C36333%2C1685530619184, suffix=, logDir=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/WALs/jenkins-hbase20.apache.org,36333,1685530619184, archiveDir=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/oldWALs, maxLogs=32
2023-05-31 10:56:59,595 INFO [RS:0;jenkins-hbase20:36333] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/WALs/jenkins-hbase20.apache.org,36333,1685530619184/jenkins-hbase20.apache.org%2C36333%2C1685530619184.1685530619582
2023-05-31 10:56:59,595 DEBUG [RS:0;jenkins-hbase20:36333] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34255,DS-f7fb2d32-c24c-4f53-b580-9212bddbd2ff,DISK], DatanodeInfoWithStorage[127.0.0.1:33157,DS-f3e6c67a-1d34-4350-87f8-531f2e4446c0,DISK]]
2023-05-31 10:56:59,660 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,36333,1685530619184
2023-05-31 10:56:59,660 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false
2023-05-31 10:56:59,663 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:36160, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService
2023-05-31 10:56:59,668 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740
2023-05-31 10:56:59,668 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-05-31 10:56:59,672 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C36333%2C1685530619184.meta, suffix=.meta,
logDir=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/WALs/jenkins-hbase20.apache.org,36333,1685530619184, archiveDir=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/oldWALs, maxLogs=32 2023-05-31 10:56:59,680 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/WALs/jenkins-hbase20.apache.org,36333,1685530619184/jenkins-hbase20.apache.org%2C36333%2C1685530619184.meta.1685530619673.meta 2023-05-31 10:56:59,680 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34255,DS-f7fb2d32-c24c-4f53-b580-9212bddbd2ff,DISK], DatanodeInfoWithStorage[127.0.0.1:33157,DS-f3e6c67a-1d34-4350-87f8-531f2e4446c0,DISK]] 2023-05-31 10:56:59,680 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-31 10:56:59,680 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-31 10:56:59,680 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-31 10:56:59,680 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-05-31 10:56:59,680 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740
2023-05-31 10:56:59,680 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 10:56:59,681 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740
2023-05-31 10:56:59,681 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740
2023-05-31 10:56:59,682 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740
2023-05-31 10:56:59,683 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/meta/1588230740/info
2023-05-31 10:56:59,683 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/meta/1588230740/info
2023-05-31 10:56:59,683 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info
2023-05-31 10:56:59,684 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 10:56:59,684 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740
2023-05-31 10:56:59,685 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/meta/1588230740/rep_barrier
2023-05-31 10:56:59,685 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/meta/1588230740/rep_barrier
2023-05-31 10:56:59,685 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier
2023-05-31 10:56:59,686 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 10:56:59,686 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740
2023-05-31 10:56:59,686 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/meta/1588230740/table
2023-05-31 10:56:59,687 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/meta/1588230740/table
2023-05-31 10:56:59,687 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table
2023-05-31 10:56:59,687 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 10:56:59,688 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/meta/1588230740
2023-05-31 10:56:59,690 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/meta/1588230740
2023-05-31 10:56:59,693 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead.
2023-05-31 10:56:59,695 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740
2023-05-31 10:56:59,696 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=881663, jitterRate=0.12109369039535522}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216}
2023-05-31 10:56:59,696 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740:
2023-05-31 10:56:59,698 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685530619660
2023-05-31 10:56:59,701 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740
2023-05-31 10:56:59,702 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740
2023-05-31 10:56:59,703 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,36333,1685530619184, state=OPEN
2023-05-31 10:56:59,704 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server
2023-05-31 10:56:59,704 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED
2023-05-31 10:56:59,706 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2
2023-05-31 10:56:59,706 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,36333,1685530619184 in 198 msec
2023-05-31 10:56:59,708 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1
2023-05-31 10:56:59,708 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 356 msec
2023-05-31 10:56:59,710 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 404 msec
2023-05-31 10:56:59,710 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685530619710, completionTime=-1
2023-05-31 10:56:59,710 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running
2023-05-31 10:56:59,710 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster...
2023-05-31 10:56:59,713 DEBUG [hconnection-0x650cfcf7-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-05-31 10:56:59,715 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:36168, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-05-31 10:56:59,717 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1
2023-05-31 10:56:59,717 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685530679717
2023-05-31 10:56:59,717 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685530739717
2023-05-31 10:56:59,717 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec
2023-05-31 10:56:59,722 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,37771,1685530619142-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled.
2023-05-31 10:56:59,722 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,37771,1685530619142-BalancerChore, period=300000, unit=MILLISECONDS is enabled.
2023-05-31 10:56:59,722 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,37771,1685530619142-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled.
2023-05-31 10:56:59,722 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:37771, period=300000, unit=MILLISECONDS is enabled.
2023-05-31 10:56:59,722 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled.
2023-05-31 10:56:59,722 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating...
2023-05-31 10:56:59,722 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}
2023-05-31 10:56:59,723 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175):
2023-05-31 10:56:59,724 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace
2023-05-31 10:56:59,725 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION
2023-05-31 10:56:59,726 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT
2023-05-31 10:56:59,728 DEBUG [HFileArchiver-9] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/.tmp/data/hbase/namespace/ac565024d7501960057caf2cf4ed562d
2023-05-31 10:56:59,728 DEBUG [HFileArchiver-9] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/.tmp/data/hbase/namespace/ac565024d7501960057caf2cf4ed562d empty.
2023-05-31 10:56:59,729 DEBUG [HFileArchiver-9] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/.tmp/data/hbase/namespace/ac565024d7501960057caf2cf4ed562d
2023-05-31 10:56:59,729 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions
2023-05-31 10:56:59,738 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001
2023-05-31 10:56:59,739 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => ac565024d7501960057caf2cf4ed562d, NAME => 'hbase:namespace,,1685530619722.ac565024d7501960057caf2cf4ed562d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/.tmp
2023-05-31 10:56:59,749 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685530619722.ac565024d7501960057caf2cf4ed562d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 10:56:59,749 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing ac565024d7501960057caf2cf4ed562d, disabling compactions & flushes
2023-05-31 10:56:59,749 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685530619722.ac565024d7501960057caf2cf4ed562d.
2023-05-31 10:56:59,749 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685530619722.ac565024d7501960057caf2cf4ed562d.
2023-05-31 10:56:59,749 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685530619722.ac565024d7501960057caf2cf4ed562d. after waiting 0 ms
2023-05-31 10:56:59,749 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685530619722.ac565024d7501960057caf2cf4ed562d.
2023-05-31 10:56:59,749 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685530619722.ac565024d7501960057caf2cf4ed562d.
2023-05-31 10:56:59,749 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for ac565024d7501960057caf2cf4ed562d:
2023-05-31 10:56:59,752 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META
2023-05-31 10:56:59,753 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685530619722.ac565024d7501960057caf2cf4ed562d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685530619753"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685530619753"}]},"ts":"1685530619753"}
2023-05-31 10:56:59,755 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta.
2023-05-31 10:56:59,756 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS
2023-05-31 10:56:59,756 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685530619756"}]},"ts":"1685530619756"}
2023-05-31 10:56:59,757 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta
2023-05-31 10:56:59,761 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=ac565024d7501960057caf2cf4ed562d, ASSIGN}]
2023-05-31 10:56:59,763 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=ac565024d7501960057caf2cf4ed562d, ASSIGN
2023-05-31 10:56:59,764 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=ac565024d7501960057caf2cf4ed562d, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,36333,1685530619184; forceNewPlan=false, retain=false
2023-05-31 10:56:59,915 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=ac565024d7501960057caf2cf4ed562d, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,36333,1685530619184
2023-05-31 10:56:59,915 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1685530619722.ac565024d7501960057caf2cf4ed562d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685530619915"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685530619915"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685530619915"}]},"ts":"1685530619915"}
2023-05-31 10:56:59,917 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure ac565024d7501960057caf2cf4ed562d, server=jenkins-hbase20.apache.org,36333,1685530619184}]
2023-05-31 10:57:00,077 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685530619722.ac565024d7501960057caf2cf4ed562d.
2023-05-31 10:57:00,077 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ac565024d7501960057caf2cf4ed562d, NAME => 'hbase:namespace,,1685530619722.ac565024d7501960057caf2cf4ed562d.', STARTKEY => '', ENDKEY => ''}
2023-05-31 10:57:00,077 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace ac565024d7501960057caf2cf4ed562d
2023-05-31 10:57:00,078 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685530619722.ac565024d7501960057caf2cf4ed562d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 10:57:00,078 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for ac565024d7501960057caf2cf4ed562d
2023-05-31 10:57:00,078 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for ac565024d7501960057caf2cf4ed562d
2023-05-31 10:57:00,081 INFO [StoreOpener-ac565024d7501960057caf2cf4ed562d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region ac565024d7501960057caf2cf4ed562d
2023-05-31 10:57:00,083 DEBUG [StoreOpener-ac565024d7501960057caf2cf4ed562d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/namespace/ac565024d7501960057caf2cf4ed562d/info
2023-05-31 10:57:00,083 DEBUG [StoreOpener-ac565024d7501960057caf2cf4ed562d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/namespace/ac565024d7501960057caf2cf4ed562d/info
2023-05-31 10:57:00,084 INFO [StoreOpener-ac565024d7501960057caf2cf4ed562d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ac565024d7501960057caf2cf4ed562d columnFamilyName info
2023-05-31 10:57:00,085 INFO [StoreOpener-ac565024d7501960057caf2cf4ed562d-1] regionserver.HStore(310): Store=ac565024d7501960057caf2cf4ed562d/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 10:57:00,086 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/namespace/ac565024d7501960057caf2cf4ed562d
2023-05-31 10:57:00,087 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/namespace/ac565024d7501960057caf2cf4ed562d
2023-05-31 10:57:00,091 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for ac565024d7501960057caf2cf4ed562d
2023-05-31 10:57:00,093 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/namespace/ac565024d7501960057caf2cf4ed562d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-05-31 10:57:00,094 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened ac565024d7501960057caf2cf4ed562d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=783649, jitterRate=-0.003539159893989563}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-05-31 10:57:00,094 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for ac565024d7501960057caf2cf4ed562d:
2023-05-31 10:57:00,096 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685530619722.ac565024d7501960057caf2cf4ed562d., pid=6, masterSystemTime=1685530620070
2023-05-31 10:57:00,098 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685530619722.ac565024d7501960057caf2cf4ed562d.
2023-05-31 10:57:00,099 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685530619722.ac565024d7501960057caf2cf4ed562d.
2023-05-31 10:57:00,099 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=ac565024d7501960057caf2cf4ed562d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,36333,1685530619184
2023-05-31 10:57:00,099 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685530619722.ac565024d7501960057caf2cf4ed562d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685530620099"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685530620099"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685530620099"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685530620099"}]},"ts":"1685530620099"}
2023-05-31 10:57:00,104 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5
2023-05-31 10:57:00,104 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure ac565024d7501960057caf2cf4ed562d, server=jenkins-hbase20.apache.org,36333,1685530619184 in 184 msec
2023-05-31 10:57:00,107 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4
2023-05-31 10:57:00,107 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=ac565024d7501960057caf2cf4ed562d, ASSIGN in 343 msec
2023-05-31 10:57:00,107 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE
2023-05-31 10:57:00,108 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685530620108"}]},"ts":"1685530620108"}
2023-05-31 10:57:00,109 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta
2023-05-31 10:57:00,111 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION
2023-05-31 10:57:00,113 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 388 msec
2023-05-31 10:57:00,125 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace
2023-05-31 10:57:00,128 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace
2023-05-31 10:57:00,128 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 10:57:00,132 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default
2023-05-31 10:57:00,142 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace
2023-05-31 10:57:00,146 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 13 msec
2023-05-31 10:57:00,155 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase
2023-05-31 10:57:00,165 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace
2023-05-31 10:57:00,171 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 14 msec
2023-05-31 10:57:00,180 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default
2023-05-31 10:57:00,182 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase
2023-05-31 10:57:00,182 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.979sec
2023-05-31 10:57:00,183 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled
2023-05-31 10:57:00,183 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting.
2023-05-31 10:57:00,183 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-31 10:57:00,183 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,37771,1685530619142-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-31 10:57:00,183 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,37771,1685530619142-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-05-31 10:57:00,186 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-31 10:57:00,201 DEBUG [Listener at localhost.localdomain/34183] zookeeper.ReadOnlyZKClient(139): Connect 0x5606cb04 to 127.0.0.1:57094 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 10:57:00,209 DEBUG [Listener at localhost.localdomain/34183] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@775b22a7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 10:57:00,211 DEBUG [hconnection-0x24af27f8-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-31 10:57:00,213 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:36176, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-31 10:57:00,215 INFO [Listener at localhost.localdomain/34183] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,37771,1685530619142 2023-05-31 10:57:00,215 INFO [Listener at localhost.localdomain/34183] fs.HFileSystem(337): Added 
intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:57:00,220 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-31 10:57:00,220 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:57:00,221 INFO [Listener at localhost.localdomain/34183] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-31 10:57:00,224 DEBUG [Listener at localhost.localdomain/34183] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-31 10:57:00,227 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:38026, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-31 10:57:00,228 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-31 10:57:00,228 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-05-31 10:57:00,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-31 10:57:00,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRolling 2023-05-31 10:57:00,235 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-05-31 10:57:00,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(697): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRolling" procId is: 9 2023-05-31 10:57:00,236 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-31 10:57:00,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-31 10:57:00,238 DEBUG [HFileArchiver-10] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/.tmp/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d 2023-05-31 10:57:00,238 DEBUG [HFileArchiver-10] backup.HFileArchiver(153): Directory 
hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/.tmp/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d empty. 2023-05-31 10:57:00,239 DEBUG [HFileArchiver-10] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/.tmp/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d 2023-05-31 10:57:00,239 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRolling regions 2023-05-31 10:57:00,249 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/.tmp/data/default/TestLogRolling-testLogRolling/.tabledesc/.tableinfo.0000000001 2023-05-31 10:57:00,250 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2eb3b6d0803f7bc80b97fbab5624c07d, NAME => 'TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/.tmp 2023-05-31 10:57:00,258 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 10:57:00,258 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] 
regionserver.HRegion(1604): Closing 2eb3b6d0803f7bc80b97fbab5624c07d, disabling compactions & flushes 2023-05-31 10:57:00,258 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d. 2023-05-31 10:57:00,258 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d. 2023-05-31 10:57:00,258 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d. after waiting 0 ms 2023-05-31 10:57:00,259 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d. 2023-05-31 10:57:00,259 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d. 
2023-05-31 10:57:00,259 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for 2eb3b6d0803f7bc80b97fbab5624c07d: 2023-05-31 10:57:00,261 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ADD_TO_META 2023-05-31 10:57:00,262 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685530620262"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685530620262"}]},"ts":"1685530620262"} 2023-05-31 10:57:00,263 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-31 10:57:00,264 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-31 10:57:00,264 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685530620264"}]},"ts":"1685530620264"} 2023-05-31 10:57:00,265 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLING in hbase:meta 2023-05-31 10:57:00,267 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=2eb3b6d0803f7bc80b97fbab5624c07d, ASSIGN}] 2023-05-31 10:57:00,269 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=2eb3b6d0803f7bc80b97fbab5624c07d, ASSIGN 2023-05-31 10:57:00,270 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=2eb3b6d0803f7bc80b97fbab5624c07d, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,36333,1685530619184; forceNewPlan=false, retain=false 2023-05-31 10:57:00,421 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=2eb3b6d0803f7bc80b97fbab5624c07d, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,36333,1685530619184 2023-05-31 10:57:00,421 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685530620420"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685530620420"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685530620420"}]},"ts":"1685530620420"} 2023-05-31 10:57:00,423 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 2eb3b6d0803f7bc80b97fbab5624c07d, server=jenkins-hbase20.apache.org,36333,1685530619184}] 2023-05-31 10:57:00,580 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d. 
2023-05-31 10:57:00,580 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2eb3b6d0803f7bc80b97fbab5624c07d, NAME => 'TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d.', STARTKEY => '', ENDKEY => ''} 2023-05-31 10:57:00,581 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling 2eb3b6d0803f7bc80b97fbab5624c07d 2023-05-31 10:57:00,581 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 10:57:00,581 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 2eb3b6d0803f7bc80b97fbab5624c07d 2023-05-31 10:57:00,581 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 2eb3b6d0803f7bc80b97fbab5624c07d 2023-05-31 10:57:00,582 INFO [StoreOpener-2eb3b6d0803f7bc80b97fbab5624c07d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 2eb3b6d0803f7bc80b97fbab5624c07d 2023-05-31 10:57:00,584 DEBUG [StoreOpener-2eb3b6d0803f7bc80b97fbab5624c07d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info 2023-05-31 10:57:00,584 DEBUG [StoreOpener-2eb3b6d0803f7bc80b97fbab5624c07d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info 2023-05-31 10:57:00,585 INFO [StoreOpener-2eb3b6d0803f7bc80b97fbab5624c07d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2eb3b6d0803f7bc80b97fbab5624c07d columnFamilyName info 2023-05-31 10:57:00,585 INFO [StoreOpener-2eb3b6d0803f7bc80b97fbab5624c07d-1] regionserver.HStore(310): Store=2eb3b6d0803f7bc80b97fbab5624c07d/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 10:57:00,586 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d 2023-05-31 10:57:00,587 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d 2023-05-31 10:57:00,590 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] 
regionserver.HRegion(1055): writing seq id for 2eb3b6d0803f7bc80b97fbab5624c07d 2023-05-31 10:57:00,593 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 10:57:00,594 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 2eb3b6d0803f7bc80b97fbab5624c07d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=835288, jitterRate=0.0621245801448822}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 10:57:00,594 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 2eb3b6d0803f7bc80b97fbab5624c07d: 2023-05-31 10:57:00,595 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d., pid=11, masterSystemTime=1685530620576 2023-05-31 10:57:00,597 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d. 2023-05-31 10:57:00,598 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d. 
2023-05-31 10:57:00,598 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=2eb3b6d0803f7bc80b97fbab5624c07d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,36333,1685530619184 2023-05-31 10:57:00,599 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685530620598"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685530620598"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685530620598"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685530620598"}]},"ts":"1685530620598"} 2023-05-31 10:57:00,605 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-31 10:57:00,605 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 2eb3b6d0803f7bc80b97fbab5624c07d, server=jenkins-hbase20.apache.org,36333,1685530619184 in 178 msec 2023-05-31 10:57:00,608 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-31 10:57:00,608 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=2eb3b6d0803f7bc80b97fbab5624c07d, ASSIGN in 338 msec 2023-05-31 10:57:00,610 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-31 10:57:00,610 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685530620610"}]},"ts":"1685530620610"} 2023-05-31 10:57:00,611 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLED in hbase:meta 2023-05-31 10:57:00,614 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-05-31 10:57:00,615 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRolling in 385 msec 2023-05-31 10:57:03,370 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-31 10:57:05,449 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-31 10:57:05,451 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-31 10:57:05,453 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRolling' 2023-05-31 10:57:10,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37771] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-31 10:57:10,240 INFO [Listener at localhost.localdomain/34183] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRolling, procId: 9 completed 2023-05-31 10:57:10,246 DEBUG [Listener at localhost.localdomain/34183] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRolling 2023-05-31 10:57:10,246 DEBUG [Listener at localhost.localdomain/34183] 
hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d. 2023-05-31 10:57:10,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36333] regionserver.HRegion(9158): Flush requested on 2eb3b6d0803f7bc80b97fbab5624c07d 2023-05-31 10:57:10,261 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 2eb3b6d0803f7bc80b97fbab5624c07d 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-31 10:57:10,275 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/.tmp/info/2b14d229ba1345f38c8e7d3a497a1882 2023-05-31 10:57:10,283 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/.tmp/info/2b14d229ba1345f38c8e7d3a497a1882 as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/2b14d229ba1345f38c8e7d3a497a1882 2023-05-31 10:57:10,289 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/2b14d229ba1345f38c8e7d3a497a1882, entries=7, sequenceid=11, filesize=12.1 K 2023-05-31 10:57:10,290 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=18.91 KB/19368 for 2eb3b6d0803f7bc80b97fbab5624c07d in 29ms, sequenceid=11, compaction requested=false 2023-05-31 10:57:10,291 DEBUG 
[MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 2eb3b6d0803f7bc80b97fbab5624c07d: 2023-05-31 10:57:10,291 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36333] regionserver.HRegion(9158): Flush requested on 2eb3b6d0803f7bc80b97fbab5624c07d 2023-05-31 10:57:10,291 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 2eb3b6d0803f7bc80b97fbab5624c07d 1/1 column families, dataSize=19.96 KB heapSize=21.63 KB 2023-05-31 10:57:10,308 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=19.96 KB at sequenceid=33 (bloomFilter=true), to=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/.tmp/info/dac1216be0784279ae400d525e309a04 2023-05-31 10:57:10,314 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/.tmp/info/dac1216be0784279ae400d525e309a04 as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/dac1216be0784279ae400d525e309a04 2023-05-31 10:57:10,319 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/dac1216be0784279ae400d525e309a04, entries=19, sequenceid=33, filesize=24.7 K 2023-05-31 10:57:10,320 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~19.96 KB/20444, heapSize ~21.61 KB/22128, currentSize=6.30 KB/6456 for 2eb3b6d0803f7bc80b97fbab5624c07d in 29ms, sequenceid=33, compaction requested=false 2023-05-31 10:57:10,320 DEBUG [MemStoreFlusher.0] 
regionserver.HRegion(2446): Flush status journal for 2eb3b6d0803f7bc80b97fbab5624c07d: 2023-05-31 10:57:10,320 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=36.9 K, sizeToCheck=16.0 K 2023-05-31 10:57:10,320 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-31 10:57:10,320 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/dac1216be0784279ae400d525e309a04 because midkey is the same as first or last row 2023-05-31 10:57:12,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36333] regionserver.HRegion(9158): Flush requested on 2eb3b6d0803f7bc80b97fbab5624c07d 2023-05-31 10:57:12,304 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 2eb3b6d0803f7bc80b97fbab5624c07d 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-31 10:57:12,320 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=43 (bloomFilter=true), to=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/.tmp/info/9733d5a031754af6833c3da5b18dd083 2023-05-31 10:57:12,327 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/.tmp/info/9733d5a031754af6833c3da5b18dd083 as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/9733d5a031754af6833c3da5b18dd083 2023-05-31 10:57:12,333 INFO 
[MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/9733d5a031754af6833c3da5b18dd083, entries=7, sequenceid=43, filesize=12.1 K 2023-05-31 10:57:12,333 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=15.76 KB/16140 for 2eb3b6d0803f7bc80b97fbab5624c07d in 29ms, sequenceid=43, compaction requested=true 2023-05-31 10:57:12,333 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 2eb3b6d0803f7bc80b97fbab5624c07d: 2023-05-31 10:57:12,334 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=49.0 K, sizeToCheck=16.0 K 2023-05-31 10:57:12,334 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-31 10:57:12,334 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/dac1216be0784279ae400d525e309a04 because midkey is the same as first or last row 2023-05-31 10:57:12,334 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 10:57:12,334 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-31 10:57:12,335 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36333] regionserver.HRegion(9158): Flush requested on 2eb3b6d0803f7bc80b97fbab5624c07d 2023-05-31 10:57:12,335 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): 
Flushing 2eb3b6d0803f7bc80b97fbab5624c07d 1/1 column families, dataSize=17.86 KB heapSize=19.38 KB 2023-05-31 10:57:12,336 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 50141 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-31 10:57:12,337 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HStore(1912): 2eb3b6d0803f7bc80b97fbab5624c07d/info is initiating minor compaction (all files) 2023-05-31 10:57:12,337 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 2eb3b6d0803f7bc80b97fbab5624c07d/info in TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d. 2023-05-31 10:57:12,337 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/2b14d229ba1345f38c8e7d3a497a1882, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/dac1216be0784279ae400d525e309a04, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/9733d5a031754af6833c3da5b18dd083] into tmpdir=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/.tmp, totalSize=49.0 K 2023-05-31 10:57:12,337 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.Compactor(207): Compacting 2b14d229ba1345f38c8e7d3a497a1882, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, 
earliestPutTs=1685530630250 2023-05-31 10:57:12,338 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.Compactor(207): Compacting dac1216be0784279ae400d525e309a04, keycount=19, bloomtype=ROW, size=24.7 K, encoding=NONE, compression=NONE, seqNum=33, earliestPutTs=1685530630262 2023-05-31 10:57:12,338 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.Compactor(207): Compacting 9733d5a031754af6833c3da5b18dd083, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=43, earliestPutTs=1685530630292 2023-05-31 10:57:12,350 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=17.86 KB at sequenceid=63 (bloomFilter=true), to=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/.tmp/info/a56cf02ee1d14674b2e8c585dc8cafc7 2023-05-31 10:57:12,354 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] throttle.PressureAwareThroughputController(145): 2eb3b6d0803f7bc80b97fbab5624c07d#info#compaction#29 average throughput is 16.93 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-31 10:57:12,361 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/.tmp/info/a56cf02ee1d14674b2e8c585dc8cafc7 as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/a56cf02ee1d14674b2e8c585dc8cafc7 2023-05-31 10:57:12,361 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36333] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=2eb3b6d0803f7bc80b97fbab5624c07d, server=jenkins-hbase20.apache.org,36333,1685530619184 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-31 10:57:12,367 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36333] ipc.CallRunner(144): callId: 71 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:36176 deadline: 1685530642361, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=2eb3b6d0803f7bc80b97fbab5624c07d, server=jenkins-hbase20.apache.org,36333,1685530619184 2023-05-31 10:57:12,374 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/a56cf02ee1d14674b2e8c585dc8cafc7, entries=17, sequenceid=63, filesize=22.6 K 2023-05-31 10:57:12,375 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~17.86 KB/18292, heapSize ~19.36 KB/19824, currentSize=12.61 KB/12912 for 2eb3b6d0803f7bc80b97fbab5624c07d in 40ms, sequenceid=63, compaction requested=false 2023-05-31 10:57:12,375 DEBUG [MemStoreFlusher.0] 
regionserver.HRegion(2446): Flush status journal for 2eb3b6d0803f7bc80b97fbab5624c07d: 2023-05-31 10:57:12,375 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=71.6 K, sizeToCheck=16.0 K 2023-05-31 10:57:12,375 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-31 10:57:12,375 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/dac1216be0784279ae400d525e309a04 because midkey is the same as first or last row 2023-05-31 10:57:12,376 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/.tmp/info/95874fa09bdd4dc2bef1cf1dce7cb7c3 as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/95874fa09bdd4dc2bef1cf1dce7cb7c3 2023-05-31 10:57:12,382 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 2eb3b6d0803f7bc80b97fbab5624c07d/info of 2eb3b6d0803f7bc80b97fbab5624c07d into 95874fa09bdd4dc2bef1cf1dce7cb7c3(size=39.6 K), total size for store is 62.3 K. This selection was in queue for 0sec, and took 0sec to execute. 
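The RegionTooBusyException above ("Over memstore limit=32.0 K", thrown from HRegion.checkResources) fires when a write arrives while the region's memstore is over its blocking size. A minimal sketch of that guard, assuming the blocking size is the flush size times the default hbase.hregion.memstore.block.multiplier of 4 (an 8 K test flush size would yield the 32.0 K limit seen here; the names and the exact limit are our assumptions, not values confirmed by this log):

```python
class RegionTooBusyError(Exception):
    """Stand-in for org.apache.hadoop.hbase.RegionTooBusyException."""


def check_resources(memstore_data_size, flush_size, block_multiplier=4):
    # Hypothetical sketch of the check behind HRegion.checkResources:
    # reject the write once the memstore grows past
    # flush_size * hbase.hregion.memstore.block.multiplier.
    blocking_size = flush_size * block_multiplier
    if memstore_data_size > blocking_size:
        raise RegionTooBusyError(
            f"Over memstore limit={blocking_size / 1024:.1f} K")
```

Under this reading, the handler thread rejected the Mutate call because currentSize was above the limit, and the client keeps seeing RegionTooBusyException until a flush drains the memstore.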
2023-05-31 10:57:12,383 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 2eb3b6d0803f7bc80b97fbab5624c07d: 2023-05-31 10:57:12,383 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d., storeName=2eb3b6d0803f7bc80b97fbab5624c07d/info, priority=13, startTime=1685530632334; duration=0sec 2023-05-31 10:57:12,383 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=62.3 K, sizeToCheck=16.0 K 2023-05-31 10:57:12,383 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-31 10:57:12,384 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/95874fa09bdd4dc2bef1cf1dce7cb7c3 because midkey is the same as first or last row 2023-05-31 10:57:12,384 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 10:57:22,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36333] regionserver.HRegion(9158): Flush requested on 2eb3b6d0803f7bc80b97fbab5624c07d 2023-05-31 10:57:22,455 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 2eb3b6d0803f7bc80b97fbab5624c07d 1/1 column families, dataSize=13.66 KB heapSize=14.88 KB 2023-05-31 10:57:22,473 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=13.66 KB at sequenceid=80 (bloomFilter=true), 
to=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/.tmp/info/b2a412c8711f4f5eaf6bcda026945d7e 2023-05-31 10:57:22,480 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/.tmp/info/b2a412c8711f4f5eaf6bcda026945d7e as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/b2a412c8711f4f5eaf6bcda026945d7e 2023-05-31 10:57:22,486 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/b2a412c8711f4f5eaf6bcda026945d7e, entries=13, sequenceid=80, filesize=18.4 K 2023-05-31 10:57:22,487 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~13.66 KB/13988, heapSize ~14.86 KB/15216, currentSize=1.05 KB/1076 for 2eb3b6d0803f7bc80b97fbab5624c07d in 32ms, sequenceid=80, compaction requested=true 2023-05-31 10:57:22,487 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 2eb3b6d0803f7bc80b97fbab5624c07d: 2023-05-31 10:57:22,487 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=80.7 K, sizeToCheck=16.0 K 2023-05-31 10:57:22,487 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-31 10:57:22,487 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split 
hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/95874fa09bdd4dc2bef1cf1dce7cb7c3 because midkey is the same as first or last row 2023-05-31 10:57:22,487 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 10:57:22,487 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-31 10:57:22,488 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 82626 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-31 10:57:22,488 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HStore(1912): 2eb3b6d0803f7bc80b97fbab5624c07d/info is initiating minor compaction (all files) 2023-05-31 10:57:22,489 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 2eb3b6d0803f7bc80b97fbab5624c07d/info in TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d. 
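The "Exploring compaction algorithm has selected 3 files ... with 1 in ratio" lines come from ExploringCompactionPolicy, which only accepts a candidate set in which every file is "in ratio": no file may be larger than the combined size of the others times hbase.hstore.compaction.ratio (1.2 by default). A minimal sketch of that acceptance check (the function name is ours, not HBase's):

```python
def files_in_ratio(sizes, ratio=1.2):
    # Every file must be <= ratio * (sum of the other files in the set);
    # otherwise one dominant file would make rewriting the set mostly
    # wasted I/O for little read-amplification benefit.
    total = sum(sizes)
    return all(size <= (total - size) * ratio for size in sizes)
```

The three files selected above pass this check: the largest, 39.6 K, is under (22.6 + 18.4) * 1.2 = 49.2 K.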
2023-05-31 10:57:22,489 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/95874fa09bdd4dc2bef1cf1dce7cb7c3, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/a56cf02ee1d14674b2e8c585dc8cafc7, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/b2a412c8711f4f5eaf6bcda026945d7e] into tmpdir=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/.tmp, totalSize=80.7 K 2023-05-31 10:57:22,489 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.Compactor(207): Compacting 95874fa09bdd4dc2bef1cf1dce7cb7c3, keycount=33, bloomtype=ROW, size=39.6 K, encoding=NONE, compression=NONE, seqNum=43, earliestPutTs=1685530630250 2023-05-31 10:57:22,489 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.Compactor(207): Compacting a56cf02ee1d14674b2e8c585dc8cafc7, keycount=17, bloomtype=ROW, size=22.6 K, encoding=NONE, compression=NONE, seqNum=63, earliestPutTs=1685530632306 2023-05-31 10:57:22,490 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.Compactor(207): Compacting b2a412c8711f4f5eaf6bcda026945d7e, keycount=13, bloomtype=ROW, size=18.4 K, encoding=NONE, compression=NONE, seqNum=80, earliestPutTs=1685530632336 2023-05-31 10:57:22,500 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] throttle.PressureAwareThroughputController(145): 2eb3b6d0803f7bc80b97fbab5624c07d#info#compaction#31 average throughput is 32.32 MB/second, slept 0 time(s) and total 
slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-31 10:57:22,510 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/.tmp/info/5fd0d091606a40ffba8281dd75b2448d as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/5fd0d091606a40ffba8281dd75b2448d 2023-05-31 10:57:22,516 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 2eb3b6d0803f7bc80b97fbab5624c07d/info of 2eb3b6d0803f7bc80b97fbab5624c07d into 5fd0d091606a40ffba8281dd75b2448d(size=71.4 K), total size for store is 71.4 K. This selection was in queue for 0sec, and took 0sec to execute. 
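The paired figures in the flush lines, e.g. "dataSize ~17.86 KB/18292", are one quantity printed twice: a two-decimal KB rendering followed by the raw byte count. A tiny sketch reproducing that rendering (the helper name is hypothetical):

```python
def fmt_kb(nbytes):
    # Render a byte count the way these log lines do: 18292 -> "17.86 KB".
    return f"{nbytes / 1024:.2f} KB"
```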
2023-05-31 10:57:22,516 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 2eb3b6d0803f7bc80b97fbab5624c07d: 2023-05-31 10:57:22,516 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d., storeName=2eb3b6d0803f7bc80b97fbab5624c07d/info, priority=13, startTime=1685530642487; duration=0sec 2023-05-31 10:57:22,516 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=71.4 K, sizeToCheck=16.0 K 2023-05-31 10:57:22,516 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-31 10:57:22,517 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.CompactSplit(227): Splitting TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d., compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 10:57:22,517 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 10:57:22,518 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37771] assignment.AssignmentManager(1140): Split request from jenkins-hbase20.apache.org,36333,1685530619184, parent={ENCODED => 2eb3b6d0803f7bc80b97fbab5624c07d, NAME => 'TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d.', STARTKEY => '', ENDKEY => ''} splitKey=row0062 2023-05-31 10:57:22,524 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37771] assignment.SplitTableRegionProcedure(219): Splittable=true state=OPEN, location=jenkins-hbase20.apache.org,36333,1685530619184 2023-05-31 10:57:22,529 DEBUG 
[RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37771] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=2eb3b6d0803f7bc80b97fbab5624c07d, daughterA=5c552d47aa2e590e40612dc2b820a3f7, daughterB=72200ad8310565f077bdbf7870786701 2023-05-31 10:57:22,530 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=2eb3b6d0803f7bc80b97fbab5624c07d, daughterA=5c552d47aa2e590e40612dc2b820a3f7, daughterB=72200ad8310565f077bdbf7870786701 2023-05-31 10:57:22,539 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=2eb3b6d0803f7bc80b97fbab5624c07d, UNASSIGN}] 2023-05-31 10:57:22,541 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=2eb3b6d0803f7bc80b97fbab5624c07d, UNASSIGN 2023-05-31 10:57:22,542 INFO [PEWorker-1]
assignment.RegionStateStore(219): pid=13 updating hbase:meta row=2eb3b6d0803f7bc80b97fbab5624c07d, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,36333,1685530619184 2023-05-31 10:57:22,542 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685530642542"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685530642542"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685530642542"}]},"ts":"1685530642542"} 2023-05-31 10:57:22,544 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; CloseRegionProcedure 2eb3b6d0803f7bc80b97fbab5624c07d, server=jenkins-hbase20.apache.org,36333,1685530619184}] 2023-05-31 10:57:22,702 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 2eb3b6d0803f7bc80b97fbab5624c07d 2023-05-31 10:57:22,702 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 2eb3b6d0803f7bc80b97fbab5624c07d, disabling compactions & flushes 2023-05-31 10:57:22,702 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d. 2023-05-31 10:57:22,702 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d. 2023-05-31 10:57:22,703 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d. 
after waiting 0 ms 2023-05-31 10:57:22,703 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d. 2023-05-31 10:57:22,703 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 2eb3b6d0803f7bc80b97fbab5624c07d 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-31 10:57:22,717 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=85 (bloomFilter=true), to=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/.tmp/info/01c2bf575b9b4b30941e43455a755bfb 2023-05-31 10:57:22,724 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/.tmp/info/01c2bf575b9b4b30941e43455a755bfb as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/01c2bf575b9b4b30941e43455a755bfb 2023-05-31 10:57:22,736 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/01c2bf575b9b4b30941e43455a755bfb, entries=1, sequenceid=85, filesize=5.8 K 2023-05-31 10:57:22,737 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 2eb3b6d0803f7bc80b97fbab5624c07d in 
34ms, sequenceid=85, compaction requested=false 2023-05-31 10:57:22,743 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/2b14d229ba1345f38c8e7d3a497a1882, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/dac1216be0784279ae400d525e309a04, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/95874fa09bdd4dc2bef1cf1dce7cb7c3, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/9733d5a031754af6833c3da5b18dd083, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/a56cf02ee1d14674b2e8c585dc8cafc7, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/b2a412c8711f4f5eaf6bcda026945d7e] to archive 2023-05-31 10:57:22,744 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d.-1] backup.HFileArchiver(360): Archiving compacted files. 
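On store close, HFileArchiver moves the compacted-away store files out of the data tree into a parallel archive tree, preserving the namespace/table/region/family layout, as the Archived-from lines below show. A sketch of that path translation under the root seen here (an illustrative helper, not the HBase API):

```python
def archive_path(store_file_path):
    # Map  .../data/<ns>/<table>/<region>/<cf>/<file>
    # to   .../archive/data/<ns>/<table>/<region>/<cf>/<file>,
    # rewriting only the first "/data/" component so the rest of the
    # HDFS path is kept intact.
    return store_file_path.replace("/data/", "/archive/data/", 1)
```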
2023-05-31 10:57:22,746 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/2b14d229ba1345f38c8e7d3a497a1882 to hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/archive/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/2b14d229ba1345f38c8e7d3a497a1882 2023-05-31 10:57:22,748 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/dac1216be0784279ae400d525e309a04 to hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/archive/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/dac1216be0784279ae400d525e309a04 2023-05-31 10:57:22,749 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/95874fa09bdd4dc2bef1cf1dce7cb7c3 to hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/archive/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/95874fa09bdd4dc2bef1cf1dce7cb7c3 2023-05-31 10:57:22,750 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d.-1] backup.HFileArchiver(582): 
Archived from FileableStoreFile, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/9733d5a031754af6833c3da5b18dd083 to hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/archive/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/9733d5a031754af6833c3da5b18dd083 2023-05-31 10:57:22,751 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/a56cf02ee1d14674b2e8c585dc8cafc7 to hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/archive/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/a56cf02ee1d14674b2e8c585dc8cafc7 2023-05-31 10:57:22,752 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/b2a412c8711f4f5eaf6bcda026945d7e to hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/archive/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/b2a412c8711f4f5eaf6bcda026945d7e 2023-05-31 10:57:22,762 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/recovered.edits/88.seqid, newMaxSeqId=88, maxSeqId=1 2023-05-31 10:57:22,763 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d. 2023-05-31 10:57:22,763 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 2eb3b6d0803f7bc80b97fbab5624c07d: 2023-05-31 10:57:22,765 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 2eb3b6d0803f7bc80b97fbab5624c07d 2023-05-31 10:57:22,765 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=2eb3b6d0803f7bc80b97fbab5624c07d, regionState=CLOSED 2023-05-31 10:57:22,766 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685530642765"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685530642765"}]},"ts":"1685530642765"} 2023-05-31 10:57:22,773 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-05-31 10:57:22,773 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; CloseRegionProcedure 2eb3b6d0803f7bc80b97fbab5624c07d, server=jenkins-hbase20.apache.org,36333,1685530619184 in 227 msec 2023-05-31 10:57:22,774 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-05-31 10:57:22,774 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, 
region=2eb3b6d0803f7bc80b97fbab5624c07d, UNASSIGN in 234 msec 2023-05-31 10:57:22,788 INFO [PEWorker-4] assignment.SplitTableRegionProcedure(694): pid=12 splitting 2 storefiles, region=2eb3b6d0803f7bc80b97fbab5624c07d, threads=2 2023-05-31 10:57:22,790 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/01c2bf575b9b4b30941e43455a755bfb for region: 2eb3b6d0803f7bc80b97fbab5624c07d 2023-05-31 10:57:22,791 DEBUG [StoreFileSplitter-pool-1] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/5fd0d091606a40ffba8281dd75b2448d for region: 2eb3b6d0803f7bc80b97fbab5624c07d 2023-05-31 10:57:22,799 DEBUG [StoreFileSplitter-pool-0] regionserver.HRegionFileSystem(700): Will create HFileLink file for hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/01c2bf575b9b4b30941e43455a755bfb, top=true 2023-05-31 10:57:22,803 INFO [StoreFileSplitter-pool-0] regionserver.HRegionFileSystem(742): Created linkFile:hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/.splits/72200ad8310565f077bdbf7870786701/info/TestLogRolling-testLogRolling=2eb3b6d0803f7bc80b97fbab5624c07d-01c2bf575b9b4b30941e43455a755bfb for child: 72200ad8310565f077bdbf7870786701, parent: 2eb3b6d0803f7bc80b97fbab5624c07d 2023-05-31 10:57:22,803 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete 
for store file: hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/01c2bf575b9b4b30941e43455a755bfb for region: 2eb3b6d0803f7bc80b97fbab5624c07d 2023-05-31 10:57:22,821 DEBUG [StoreFileSplitter-pool-1] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/5fd0d091606a40ffba8281dd75b2448d for region: 2eb3b6d0803f7bc80b97fbab5624c07d 2023-05-31 10:57:22,821 DEBUG [PEWorker-4] assignment.SplitTableRegionProcedure(755): pid=12 split storefiles for region 2eb3b6d0803f7bc80b97fbab5624c07d Daughter A: 1 storefiles, Daughter B: 2 storefiles. 2023-05-31 10:57:22,843 DEBUG [PEWorker-4] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/5c552d47aa2e590e40612dc2b820a3f7/recovered.edits/88.seqid, newMaxSeqId=88, maxSeqId=-1 2023-05-31 10:57:22,845 DEBUG [PEWorker-4] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/recovered.edits/88.seqid, newMaxSeqId=88, maxSeqId=-1 2023-05-31 10:57:22,847 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685530642846"},{"qualifier":"splitA","vlen":70,"tag":[],"timestamp":"1685530642846"},{"qualifier":"splitB","vlen":70,"tag":[],"timestamp":"1685530642846"}]},"ts":"1685530642846"} 2023-05-31 10:57:22,847 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685530642524.5c552d47aa2e590e40612dc2b820a3f7.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685530642846"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685530642846"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685530642846"}]},"ts":"1685530642846"} 2023-05-31 10:57:22,847 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685530642846"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685530642846"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685530642846"}]},"ts":"1685530642846"} 2023-05-31 10:57:22,881 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=36333] regionserver.HRegion(9158): Flush requested on 1588230740 2023-05-31 10:57:22,881 DEBUG [MemStoreFlusher.0] regionserver.FlushAllLargeStoresPolicy(69): Since none of the CFs were above the size, flushing all. 
2023-05-31 10:57:22,882 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.82 KB heapSize=8.36 KB 2023-05-31 10:57:22,890 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=5c552d47aa2e590e40612dc2b820a3f7, ASSIGN}, {pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=72200ad8310565f077bdbf7870786701, ASSIGN}] 2023-05-31 10:57:22,891 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=72200ad8310565f077bdbf7870786701, ASSIGN 2023-05-31 10:57:22,892 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=5c552d47aa2e590e40612dc2b820a3f7, ASSIGN 2023-05-31 10:57:22,892 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=72200ad8310565f077bdbf7870786701, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase20.apache.org,36333,1685530619184; forceNewPlan=false, retain=false 2023-05-31 10:57:22,892 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=5c552d47aa2e590e40612dc2b820a3f7, ASSIGN; state=SPLITTING_NEW, 
location=jenkins-hbase20.apache.org,36333,1685530619184; forceNewPlan=false, retain=false 2023-05-31 10:57:22,893 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.61 KB at sequenceid=17 (bloomFilter=false), to=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/meta/1588230740/.tmp/info/043a6561856543da965e74ce7354a879 2023-05-31 10:57:22,907 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=216 B at sequenceid=17 (bloomFilter=false), to=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/meta/1588230740/.tmp/table/22465ca7ac9844b8aec49508756ba15f 2023-05-31 10:57:22,913 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/meta/1588230740/.tmp/info/043a6561856543da965e74ce7354a879 as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/meta/1588230740/info/043a6561856543da965e74ce7354a879 2023-05-31 10:57:22,918 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/meta/1588230740/info/043a6561856543da965e74ce7354a879, entries=29, sequenceid=17, filesize=8.6 K 2023-05-31 10:57:22,918 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/meta/1588230740/.tmp/table/22465ca7ac9844b8aec49508756ba15f as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/meta/1588230740/table/22465ca7ac9844b8aec49508756ba15f 2023-05-31 10:57:22,924 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added 
hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/meta/1588230740/table/22465ca7ac9844b8aec49508756ba15f, entries=4, sequenceid=17, filesize=4.8 K 2023-05-31 10:57:22,925 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~4.82 KB/4939, heapSize ~8.08 KB/8272, currentSize=0 B/0 for 1588230740 in 43ms, sequenceid=17, compaction requested=false 2023-05-31 10:57:22,925 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-05-31 10:57:23,045 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=72200ad8310565f077bdbf7870786701, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,36333,1685530619184 2023-05-31 10:57:23,045 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=5c552d47aa2e590e40612dc2b820a3f7, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,36333,1685530619184 2023-05-31 10:57:23,046 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685530643045"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685530643045"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685530643045"}]},"ts":"1685530643045"} 2023-05-31 10:57:23,046 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685530642524.5c552d47aa2e590e40612dc2b820a3f7.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685530643045"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685530643045"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685530643045"}]},"ts":"1685530643045"} 2023-05-31 10:57:23,050 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, 
state=RUNNABLE; OpenRegionProcedure 72200ad8310565f077bdbf7870786701, server=jenkins-hbase20.apache.org,36333,1685530619184}] 2023-05-31 10:57:23,052 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=15, state=RUNNABLE; OpenRegionProcedure 5c552d47aa2e590e40612dc2b820a3f7, server=jenkins-hbase20.apache.org,36333,1685530619184}] 2023-05-31 10:57:23,207 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,,1685530642524.5c552d47aa2e590e40612dc2b820a3f7. 2023-05-31 10:57:23,207 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5c552d47aa2e590e40612dc2b820a3f7, NAME => 'TestLogRolling-testLogRolling,,1685530642524.5c552d47aa2e590e40612dc2b820a3f7.', STARTKEY => '', ENDKEY => 'row0062'} 2023-05-31 10:57:23,207 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling 5c552d47aa2e590e40612dc2b820a3f7 2023-05-31 10:57:23,207 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1685530642524.5c552d47aa2e590e40612dc2b820a3f7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 10:57:23,207 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 5c552d47aa2e590e40612dc2b820a3f7 2023-05-31 10:57:23,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 5c552d47aa2e590e40612dc2b820a3f7 2023-05-31 10:57:23,209 INFO [StoreOpener-5c552d47aa2e590e40612dc2b820a3f7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, 
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 5c552d47aa2e590e40612dc2b820a3f7 2023-05-31 10:57:23,210 DEBUG [StoreOpener-5c552d47aa2e590e40612dc2b820a3f7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/5c552d47aa2e590e40612dc2b820a3f7/info 2023-05-31 10:57:23,210 DEBUG [StoreOpener-5c552d47aa2e590e40612dc2b820a3f7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/5c552d47aa2e590e40612dc2b820a3f7/info 2023-05-31 10:57:23,210 INFO [StoreOpener-5c552d47aa2e590e40612dc2b820a3f7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5c552d47aa2e590e40612dc2b820a3f7 columnFamilyName info 2023-05-31 10:57:23,224 DEBUG [StoreOpener-5c552d47aa2e590e40612dc2b820a3f7-1] regionserver.HStore(539): loaded 
hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/5c552d47aa2e590e40612dc2b820a3f7/info/5fd0d091606a40ffba8281dd75b2448d.2eb3b6d0803f7bc80b97fbab5624c07d->hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/5fd0d091606a40ffba8281dd75b2448d-bottom 2023-05-31 10:57:23,225 INFO [StoreOpener-5c552d47aa2e590e40612dc2b820a3f7-1] regionserver.HStore(310): Store=5c552d47aa2e590e40612dc2b820a3f7/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 10:57:23,226 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/5c552d47aa2e590e40612dc2b820a3f7 2023-05-31 10:57:23,227 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/5c552d47aa2e590e40612dc2b820a3f7 2023-05-31 10:57:23,230 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 5c552d47aa2e590e40612dc2b820a3f7 2023-05-31 10:57:23,231 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 5c552d47aa2e590e40612dc2b820a3f7; next sequenceid=89; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=704633, jitterRate=-0.10401324927806854}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 10:57:23,231 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 5c552d47aa2e590e40612dc2b820a3f7: 2023-05-31 10:57:23,232 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1685530642524.5c552d47aa2e590e40612dc2b820a3f7., pid=18, masterSystemTime=1685530643204 2023-05-31 10:57:23,232 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 10:57:23,232 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 1 store files, 0 compacting, 1 eligible, 16 blocking 2023-05-31 10:57:23,233 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,,1685530642524.5c552d47aa2e590e40612dc2b820a3f7. 2023-05-31 10:57:23,233 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HStore(1912): 5c552d47aa2e590e40612dc2b820a3f7/info is initiating minor compaction (all files) 2023-05-31 10:57:23,233 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 5c552d47aa2e590e40612dc2b820a3f7/info in TestLogRolling-testLogRolling,,1685530642524.5c552d47aa2e590e40612dc2b820a3f7. 
2023-05-31 10:57:23,233 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/5c552d47aa2e590e40612dc2b820a3f7/info/5fd0d091606a40ffba8281dd75b2448d.2eb3b6d0803f7bc80b97fbab5624c07d->hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/5fd0d091606a40ffba8281dd75b2448d-bottom] into tmpdir=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/5c552d47aa2e590e40612dc2b820a3f7/.tmp, totalSize=71.4 K 2023-05-31 10:57:23,234 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.Compactor(207): Compacting 5fd0d091606a40ffba8281dd75b2448d.2eb3b6d0803f7bc80b97fbab5624c07d, keycount=31, bloomtype=ROW, size=71.4 K, encoding=NONE, compression=NONE, seqNum=80, earliestPutTs=1685530630250 2023-05-31 10:57:23,234 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1685530642524.5c552d47aa2e590e40612dc2b820a3f7. 2023-05-31 10:57:23,234 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1685530642524.5c552d47aa2e590e40612dc2b820a3f7. 2023-05-31 10:57:23,234 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701. 
2023-05-31 10:57:23,234 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 72200ad8310565f077bdbf7870786701, NAME => 'TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.', STARTKEY => 'row0062', ENDKEY => ''} 2023-05-31 10:57:23,234 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling 72200ad8310565f077bdbf7870786701 2023-05-31 10:57:23,234 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 10:57:23,234 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 72200ad8310565f077bdbf7870786701 2023-05-31 10:57:23,235 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 72200ad8310565f077bdbf7870786701 2023-05-31 10:57:23,235 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=5c552d47aa2e590e40612dc2b820a3f7, regionState=OPEN, openSeqNum=89, regionLocation=jenkins-hbase20.apache.org,36333,1685530619184 2023-05-31 10:57:23,235 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1685530642524.5c552d47aa2e590e40612dc2b820a3f7.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685530643234"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685530643234"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685530643234"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685530643234"}]},"ts":"1685530643234"} 2023-05-31 10:57:23,236 INFO 
[StoreOpener-72200ad8310565f077bdbf7870786701-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 72200ad8310565f077bdbf7870786701 2023-05-31 10:57:23,237 DEBUG [StoreOpener-72200ad8310565f077bdbf7870786701-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info 2023-05-31 10:57:23,237 DEBUG [StoreOpener-72200ad8310565f077bdbf7870786701-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info 2023-05-31 10:57:23,238 INFO [StoreOpener-72200ad8310565f077bdbf7870786701-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 72200ad8310565f077bdbf7870786701 columnFamilyName info 2023-05-31 10:57:23,239 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=15 2023-05-31 10:57:23,240 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=15, 
state=SUCCESS; OpenRegionProcedure 5c552d47aa2e590e40612dc2b820a3f7, server=jenkins-hbase20.apache.org,36333,1685530619184 in 185 msec 2023-05-31 10:57:23,243 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=5c552d47aa2e590e40612dc2b820a3f7, ASSIGN in 350 msec 2023-05-31 10:57:23,243 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] throttle.PressureAwareThroughputController(145): 5c552d47aa2e590e40612dc2b820a3f7#info#compaction#35 average throughput is 15.65 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-31 10:57:23,248 DEBUG [StoreOpener-72200ad8310565f077bdbf7870786701-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/5fd0d091606a40ffba8281dd75b2448d.2eb3b6d0803f7bc80b97fbab5624c07d->hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/5fd0d091606a40ffba8281dd75b2448d-top 2023-05-31 10:57:23,256 DEBUG [StoreOpener-72200ad8310565f077bdbf7870786701-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/TestLogRolling-testLogRolling=2eb3b6d0803f7bc80b97fbab5624c07d-01c2bf575b9b4b30941e43455a755bfb 2023-05-31 10:57:23,256 INFO [StoreOpener-72200ad8310565f077bdbf7870786701-1] regionserver.HStore(310): Store=72200ad8310565f077bdbf7870786701/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 10:57:23,257 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701 2023-05-31 10:57:23,258 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701 2023-05-31 10:57:23,258 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/5c552d47aa2e590e40612dc2b820a3f7/.tmp/info/2b240c1581e145bd9e70c187353e0728 as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/5c552d47aa2e590e40612dc2b820a3f7/info/2b240c1581e145bd9e70c187353e0728 2023-05-31 10:57:23,261 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 72200ad8310565f077bdbf7870786701 2023-05-31 10:57:23,261 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 72200ad8310565f077bdbf7870786701; next sequenceid=89; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=775431, jitterRate=-0.013988718390464783}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 10:57:23,261 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 72200ad8310565f077bdbf7870786701: 2023-05-31 10:57:23,262 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open 
deploy tasks for TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701., pid=17, masterSystemTime=1685530643204 2023-05-31 10:57:23,262 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 10:57:23,264 DEBUG [RS:0;jenkins-hbase20:36333-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 2 store files, 0 compacting, 2 eligible, 16 blocking 2023-05-31 10:57:23,266 INFO [RS:0;jenkins-hbase20:36333-longCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701. 2023-05-31 10:57:23,266 DEBUG [RS:0;jenkins-hbase20:36333-longCompactions-0] regionserver.HStore(1912): 72200ad8310565f077bdbf7870786701/info is initiating minor compaction (all files) 2023-05-31 10:57:23,266 INFO [RS:0;jenkins-hbase20:36333-longCompactions-0] regionserver.HRegion(2259): Starting compaction of 72200ad8310565f077bdbf7870786701/info in TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701. 2023-05-31 10:57:23,266 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701. 
2023-05-31 10:57:23,266 INFO [RS:0;jenkins-hbase20:36333-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/5fd0d091606a40ffba8281dd75b2448d.2eb3b6d0803f7bc80b97fbab5624c07d->hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/5fd0d091606a40ffba8281dd75b2448d-top, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/TestLogRolling-testLogRolling=2eb3b6d0803f7bc80b97fbab5624c07d-01c2bf575b9b4b30941e43455a755bfb] into tmpdir=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp, totalSize=77.2 K 2023-05-31 10:57:23,266 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701. 
2023-05-31 10:57:23,267 DEBUG [RS:0;jenkins-hbase20:36333-longCompactions-0] compactions.Compactor(207): Compacting 5fd0d091606a40ffba8281dd75b2448d.2eb3b6d0803f7bc80b97fbab5624c07d, keycount=31, bloomtype=ROW, size=71.4 K, encoding=NONE, compression=NONE, seqNum=81, earliestPutTs=1685530630250 2023-05-31 10:57:23,267 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=72200ad8310565f077bdbf7870786701, regionState=OPEN, openSeqNum=89, regionLocation=jenkins-hbase20.apache.org,36333,1685530619184 2023-05-31 10:57:23,268 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 1 (all) file(s) in 5c552d47aa2e590e40612dc2b820a3f7/info of 5c552d47aa2e590e40612dc2b820a3f7 into 2b240c1581e145bd9e70c187353e0728(size=69.1 K), total size for store is 69.1 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-31 10:57:23,268 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685530643267"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685530643267"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685530643267"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685530643267"}]},"ts":"1685530643267"} 2023-05-31 10:57:23,268 DEBUG [RS:0;jenkins-hbase20:36333-longCompactions-0] compactions.Compactor(207): Compacting TestLogRolling-testLogRolling=2eb3b6d0803f7bc80b97fbab5624c07d-01c2bf575b9b4b30941e43455a755bfb, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=85, earliestPutTs=1685530642457 2023-05-31 10:57:23,268 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 5c552d47aa2e590e40612dc2b820a3f7: 2023-05-31 10:57:23,268 INFO 
[RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1685530642524.5c552d47aa2e590e40612dc2b820a3f7., storeName=5c552d47aa2e590e40612dc2b820a3f7/info, priority=15, startTime=1685530643232; duration=0sec 2023-05-31 10:57:23,268 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 10:57:23,271 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-05-31 10:57:23,271 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; OpenRegionProcedure 72200ad8310565f077bdbf7870786701, server=jenkins-hbase20.apache.org,36333,1685530619184 in 219 msec 2023-05-31 10:57:23,273 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=12 2023-05-31 10:57:23,274 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=72200ad8310565f077bdbf7870786701, ASSIGN in 381 msec 2023-05-31 10:57:23,275 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=2eb3b6d0803f7bc80b97fbab5624c07d, daughterA=5c552d47aa2e590e40612dc2b820a3f7, daughterB=72200ad8310565f077bdbf7870786701 in 750 msec 2023-05-31 10:57:23,276 INFO [RS:0;jenkins-hbase20:36333-longCompactions-0] throttle.PressureAwareThroughputController(145): 72200ad8310565f077bdbf7870786701#info#compaction#36 average throughput is 3.08 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-31 10:57:23,288 DEBUG [RS:0;jenkins-hbase20:36333-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/ff5ae3c6c7e14120a6e5dae6a3cb60a1 as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/ff5ae3c6c7e14120a6e5dae6a3cb60a1 2023-05-31 10:57:23,295 INFO [RS:0;jenkins-hbase20:36333-longCompactions-0] regionserver.HStore(1652): Completed compaction of 2 (all) file(s) in 72200ad8310565f077bdbf7870786701/info of 72200ad8310565f077bdbf7870786701 into ff5ae3c6c7e14120a6e5dae6a3cb60a1(size=8.1 K), total size for store is 8.1 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-31 10:57:23,295 DEBUG [RS:0;jenkins-hbase20:36333-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for 72200ad8310565f077bdbf7870786701: 2023-05-31 10:57:23,295 INFO [RS:0;jenkins-hbase20:36333-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701., storeName=72200ad8310565f077bdbf7870786701/info, priority=14, startTime=1685530643262; duration=0sec 2023-05-31 10:57:23,295 DEBUG [RS:0;jenkins-hbase20:36333-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 10:57:24,462 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36333] ipc.CallRunner(144): callId: 75 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:36176 deadline: 1685530654461, exception=org.apache.hadoop.hbase.NotServingRegionException: 
TestLogRolling-testLogRolling,,1685530620228.2eb3b6d0803f7bc80b97fbab5624c07d. is not online on jenkins-hbase20.apache.org,36333,1685530619184 2023-05-31 10:57:28,322 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-31 10:57:34,508 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36333] regionserver.HRegion(9158): Flush requested on 72200ad8310565f077bdbf7870786701 2023-05-31 10:57:34,508 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 72200ad8310565f077bdbf7870786701 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-31 10:57:34,517 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=99 (bloomFilter=true), to=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/e1bc496a2f6e4f9c89acf837e0874ffa 2023-05-31 10:57:34,523 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/e1bc496a2f6e4f9c89acf837e0874ffa as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/e1bc496a2f6e4f9c89acf837e0874ffa 2023-05-31 10:57:34,528 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/e1bc496a2f6e4f9c89acf837e0874ffa, entries=7, sequenceid=99, filesize=12.1 K 2023-05-31 10:57:34,529 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize 
~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=17.86 KB/18292 for 72200ad8310565f077bdbf7870786701 in 21ms, sequenceid=99, compaction requested=false 2023-05-31 10:57:34,529 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 72200ad8310565f077bdbf7870786701: 2023-05-31 10:57:34,529 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36333] regionserver.HRegion(9158): Flush requested on 72200ad8310565f077bdbf7870786701 2023-05-31 10:57:34,529 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 72200ad8310565f077bdbf7870786701 1/1 column families, dataSize=18.91 KB heapSize=20.50 KB 2023-05-31 10:57:34,540 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=18.91 KB at sequenceid=120 (bloomFilter=true), to=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/78ec41b0faa74e398c42caea8820b464 2023-05-31 10:57:34,546 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/78ec41b0faa74e398c42caea8820b464 as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/78ec41b0faa74e398c42caea8820b464 2023-05-31 10:57:34,551 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/78ec41b0faa74e398c42caea8820b464, entries=18, sequenceid=120, filesize=23.7 K 2023-05-31 10:57:34,552 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~18.91 KB/19368, 
heapSize ~20.48 KB/20976, currentSize=7.36 KB/7532 for 72200ad8310565f077bdbf7870786701 in 23ms, sequenceid=120, compaction requested=true 2023-05-31 10:57:34,552 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 72200ad8310565f077bdbf7870786701: 2023-05-31 10:57:34,552 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-05-31 10:57:34,552 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-31 10:57:34,553 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 44914 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-31 10:57:34,553 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HStore(1912): 72200ad8310565f077bdbf7870786701/info is initiating minor compaction (all files) 2023-05-31 10:57:34,554 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 72200ad8310565f077bdbf7870786701/info in TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701. 
2023-05-31 10:57:34,554 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/ff5ae3c6c7e14120a6e5dae6a3cb60a1, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/e1bc496a2f6e4f9c89acf837e0874ffa, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/78ec41b0faa74e398c42caea8820b464] into tmpdir=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp, totalSize=43.9 K 2023-05-31 10:57:34,554 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.Compactor(207): Compacting ff5ae3c6c7e14120a6e5dae6a3cb60a1, keycount=3, bloomtype=ROW, size=8.1 K, encoding=NONE, compression=NONE, seqNum=85, earliestPutTs=1685530632356 2023-05-31 10:57:34,555 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.Compactor(207): Compacting e1bc496a2f6e4f9c89acf837e0874ffa, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=99, earliestPutTs=1685530654501 2023-05-31 10:57:34,555 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.Compactor(207): Compacting 78ec41b0faa74e398c42caea8820b464, keycount=18, bloomtype=ROW, size=23.7 K, encoding=NONE, compression=NONE, seqNum=120, earliestPutTs=1685530654509 2023-05-31 10:57:34,566 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] throttle.PressureAwareThroughputController(145): 72200ad8310565f077bdbf7870786701#info#compaction#39 average throughput is 28.73 MB/second, slept 0 time(s) and total 
slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-31 10:57:34,578 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/c94a0be2b1974d58b64444e1927be36b as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/c94a0be2b1974d58b64444e1927be36b 2023-05-31 10:57:34,584 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 72200ad8310565f077bdbf7870786701/info of 72200ad8310565f077bdbf7870786701 into c94a0be2b1974d58b64444e1927be36b(size=34.5 K), total size for store is 34.5 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-31 10:57:34,584 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 72200ad8310565f077bdbf7870786701: 2023-05-31 10:57:34,584 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701., storeName=72200ad8310565f077bdbf7870786701/info, priority=13, startTime=1685530654552; duration=0sec 2023-05-31 10:57:34,585 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 10:57:36,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36333] regionserver.HRegion(9158): Flush requested on 72200ad8310565f077bdbf7870786701 2023-05-31 10:57:36,543 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 72200ad8310565f077bdbf7870786701 1/1 column families, dataSize=8.41 KB heapSize=9.25 KB 2023-05-31 10:57:36,555 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=8.41 KB at sequenceid=132 (bloomFilter=true), to=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/0e496297cc0846e98c1771ccea813de4 2023-05-31 10:57:36,562 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/0e496297cc0846e98c1771ccea813de4 as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/0e496297cc0846e98c1771ccea813de4 2023-05-31 10:57:36,568 INFO 
[MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/0e496297cc0846e98c1771ccea813de4, entries=8, sequenceid=132, filesize=13.2 K 2023-05-31 10:57:36,569 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~8.41 KB/8608, heapSize ~9.23 KB/9456, currentSize=17.86 KB/18292 for 72200ad8310565f077bdbf7870786701 in 26ms, sequenceid=132, compaction requested=false 2023-05-31 10:57:36,569 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 72200ad8310565f077bdbf7870786701: 2023-05-31 10:57:36,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36333] regionserver.HRegion(9158): Flush requested on 72200ad8310565f077bdbf7870786701 2023-05-31 10:57:36,570 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 72200ad8310565f077bdbf7870786701 1/1 column families, dataSize=18.91 KB heapSize=20.50 KB 2023-05-31 10:57:36,583 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36333] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=72200ad8310565f077bdbf7870786701, server=jenkins-hbase20.apache.org,36333,1685530619184
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-05-31 10:57:36,584 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36333] ipc.CallRunner(144): callId: 141 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:36176 deadline: 1685530666583, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=72200ad8310565f077bdbf7870786701, server=jenkins-hbase20.apache.org,36333,1685530619184 2023-05-31 10:57:36,589 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=18.91 KB at sequenceid=153 (bloomFilter=true), to=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/a8e57c70b1644aef800776ec120c38a5 2023-05-31 10:57:36,595 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing
hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/a8e57c70b1644aef800776ec120c38a5 as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/a8e57c70b1644aef800776ec120c38a5 2023-05-31 10:57:36,600 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/a8e57c70b1644aef800776ec120c38a5, entries=18, sequenceid=153, filesize=23.7 K 2023-05-31 10:57:36,601 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~18.91 KB/19368, heapSize ~20.48 KB/20976, currentSize=11.56 KB/11836 for 72200ad8310565f077bdbf7870786701 in 31ms, sequenceid=153, compaction requested=true 2023-05-31 10:57:36,601 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 72200ad8310565f077bdbf7870786701: 2023-05-31 10:57:36,601 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-05-31 10:57:36,601 DEBUG [RS:0;jenkins-hbase20:36333-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-31 10:57:36,603 DEBUG [RS:0;jenkins-hbase20:36333-longCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 73098 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-31 10:57:36,603 DEBUG [RS:0;jenkins-hbase20:36333-longCompactions-0] regionserver.HStore(1912): 72200ad8310565f077bdbf7870786701/info is initiating minor 
compaction (all files) 2023-05-31 10:57:36,603 INFO [RS:0;jenkins-hbase20:36333-longCompactions-0] regionserver.HRegion(2259): Starting compaction of 72200ad8310565f077bdbf7870786701/info in TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701. 2023-05-31 10:57:36,603 INFO [RS:0;jenkins-hbase20:36333-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/c94a0be2b1974d58b64444e1927be36b, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/0e496297cc0846e98c1771ccea813de4, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/a8e57c70b1644aef800776ec120c38a5] into tmpdir=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp, totalSize=71.4 K 2023-05-31 10:57:36,603 DEBUG [RS:0;jenkins-hbase20:36333-longCompactions-0] compactions.Compactor(207): Compacting c94a0be2b1974d58b64444e1927be36b, keycount=28, bloomtype=ROW, size=34.5 K, encoding=NONE, compression=NONE, seqNum=120, earliestPutTs=1685530632356 2023-05-31 10:57:36,604 DEBUG [RS:0;jenkins-hbase20:36333-longCompactions-0] compactions.Compactor(207): Compacting 0e496297cc0846e98c1771ccea813de4, keycount=8, bloomtype=ROW, size=13.2 K, encoding=NONE, compression=NONE, seqNum=132, earliestPutTs=1685530654530 2023-05-31 10:57:36,604 DEBUG [RS:0;jenkins-hbase20:36333-longCompactions-0] compactions.Compactor(207): Compacting a8e57c70b1644aef800776ec120c38a5, keycount=18, bloomtype=ROW, size=23.7 K, encoding=NONE, compression=NONE, 
seqNum=153, earliestPutTs=1685530656545 2023-05-31 10:57:36,616 INFO [RS:0;jenkins-hbase20:36333-longCompactions-0] throttle.PressureAwareThroughputController(145): 72200ad8310565f077bdbf7870786701#info#compaction#42 average throughput is 27.71 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-31 10:57:36,632 DEBUG [RS:0;jenkins-hbase20:36333-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/eb3cb20b17814e0c91eaf7d96a44836a as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/eb3cb20b17814e0c91eaf7d96a44836a 2023-05-31 10:57:36,637 INFO [RS:0;jenkins-hbase20:36333-longCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 72200ad8310565f077bdbf7870786701/info of 72200ad8310565f077bdbf7870786701 into eb3cb20b17814e0c91eaf7d96a44836a(size=62.0 K), total size for store is 62.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-31 10:57:36,637 DEBUG [RS:0;jenkins-hbase20:36333-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for 72200ad8310565f077bdbf7870786701: 2023-05-31 10:57:36,637 INFO [RS:0;jenkins-hbase20:36333-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701., storeName=72200ad8310565f077bdbf7870786701/info, priority=13, startTime=1685530656601; duration=0sec 2023-05-31 10:57:36,637 DEBUG [RS:0;jenkins-hbase20:36333-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 10:57:45,321 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): data stats (chunk size=2097152): current pool size=2, created chunk count=13, reused chunk count=33, reuseRatio=71.74% 2023-05-31 10:57:45,322 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): index stats (chunk size=209715): current pool size=0, created chunk count=0, reused chunk count=0, reuseRatio=0 2023-05-31 10:57:46,606 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36333] regionserver.HRegion(9158): Flush requested on 72200ad8310565f077bdbf7870786701 2023-05-31 10:57:46,606 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 72200ad8310565f077bdbf7870786701 1/1 column families, dataSize=12.61 KB heapSize=13.75 KB 2023-05-31 10:57:46,620 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=12.61 KB at sequenceid=169 (bloomFilter=true), 
to=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/2397082b2f8c46c8901ada00335f669e 2023-05-31 10:57:46,628 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/2397082b2f8c46c8901ada00335f669e as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/2397082b2f8c46c8901ada00335f669e 2023-05-31 10:57:46,633 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/2397082b2f8c46c8901ada00335f669e, entries=12, sequenceid=169, filesize=17.4 K 2023-05-31 10:57:46,634 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~12.61 KB/12912, heapSize ~13.73 KB/14064, currentSize=1.05 KB/1076 for 72200ad8310565f077bdbf7870786701 in 28ms, sequenceid=169, compaction requested=false 2023-05-31 10:57:46,634 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 72200ad8310565f077bdbf7870786701: 2023-05-31 10:57:48,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36333] regionserver.HRegion(9158): Flush requested on 72200ad8310565f077bdbf7870786701 2023-05-31 10:57:48,627 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 72200ad8310565f077bdbf7870786701 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-31 10:57:48,641 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=179 (bloomFilter=true), 
to=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/d4ba5f0cc1d54b6994145e6d9c31918a 2023-05-31 10:57:48,647 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/d4ba5f0cc1d54b6994145e6d9c31918a as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/d4ba5f0cc1d54b6994145e6d9c31918a 2023-05-31 10:57:48,653 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/d4ba5f0cc1d54b6994145e6d9c31918a, entries=7, sequenceid=179, filesize=12.1 K 2023-05-31 10:57:48,654 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=16.81 KB/17216 for 72200ad8310565f077bdbf7870786701 in 27ms, sequenceid=179, compaction requested=true 2023-05-31 10:57:48,654 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 72200ad8310565f077bdbf7870786701: 2023-05-31 10:57:48,654 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 10:57:48,654 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-31 10:57:48,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36333] regionserver.HRegion(9158): 
Flush requested on 72200ad8310565f077bdbf7870786701 2023-05-31 10:57:48,654 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 72200ad8310565f077bdbf7870786701 1/1 column families, dataSize=17.86 KB heapSize=19.38 KB 2023-05-31 10:57:48,656 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 93734 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-31 10:57:48,656 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HStore(1912): 72200ad8310565f077bdbf7870786701/info is initiating minor compaction (all files) 2023-05-31 10:57:48,656 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 72200ad8310565f077bdbf7870786701/info in TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701. 2023-05-31 10:57:48,656 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/eb3cb20b17814e0c91eaf7d96a44836a, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/2397082b2f8c46c8901ada00335f669e, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/d4ba5f0cc1d54b6994145e6d9c31918a] into tmpdir=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp, totalSize=91.5 K 2023-05-31 10:57:48,656 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] 
compactions.Compactor(207): Compacting eb3cb20b17814e0c91eaf7d96a44836a, keycount=54, bloomtype=ROW, size=62.0 K, encoding=NONE, compression=NONE, seqNum=153, earliestPutTs=1685530632356 2023-05-31 10:57:48,657 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.Compactor(207): Compacting 2397082b2f8c46c8901ada00335f669e, keycount=12, bloomtype=ROW, size=17.4 K, encoding=NONE, compression=NONE, seqNum=169, earliestPutTs=1685530656570 2023-05-31 10:57:48,658 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.Compactor(207): Compacting d4ba5f0cc1d54b6994145e6d9c31918a, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=179, earliestPutTs=1685530666607 2023-05-31 10:57:48,673 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=17.86 KB at sequenceid=199 (bloomFilter=true), to=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/01664acbc93d43dcaf5bf3f9e2d70c02 2023-05-31 10:57:48,677 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] throttle.PressureAwareThroughputController(145): 72200ad8310565f077bdbf7870786701#info#compaction#46 average throughput is 37.45 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-31 10:57:48,680 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/01664acbc93d43dcaf5bf3f9e2d70c02 as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/01664acbc93d43dcaf5bf3f9e2d70c02 2023-05-31 10:57:48,688 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/01664acbc93d43dcaf5bf3f9e2d70c02, entries=17, sequenceid=199, filesize=22.7 K 2023-05-31 10:57:48,689 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~17.86 KB/18292, heapSize ~19.36 KB/19824, currentSize=9.46 KB/9684 for 72200ad8310565f077bdbf7870786701 in 35ms, sequenceid=199, compaction requested=false 2023-05-31 10:57:48,689 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 72200ad8310565f077bdbf7870786701: 2023-05-31 10:57:48,694 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/12e017203ec74dd991a10395571fee58 as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/12e017203ec74dd991a10395571fee58 2023-05-31 10:57:48,698 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 
3 (all) file(s) in 72200ad8310565f077bdbf7870786701/info of 72200ad8310565f077bdbf7870786701 into 12e017203ec74dd991a10395571fee58(size=82.2 K), total size for store is 104.9 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-31 10:57:48,698 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 72200ad8310565f077bdbf7870786701: 2023-05-31 10:57:48,698 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701., storeName=72200ad8310565f077bdbf7870786701/info, priority=13, startTime=1685530668654; duration=0sec 2023-05-31 10:57:48,699 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 10:57:50,672 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36333] regionserver.HRegion(9158): Flush requested on 72200ad8310565f077bdbf7870786701 2023-05-31 10:57:50,672 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 72200ad8310565f077bdbf7870786701 1/1 column families, dataSize=10.51 KB heapSize=11.50 KB 2023-05-31 10:57:50,685 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.51 KB at sequenceid=213 (bloomFilter=true), to=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/1a44fb2ffbd9473aa284d07ee8d459ac 2023-05-31 10:57:50,690 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/1a44fb2ffbd9473aa284d07ee8d459ac as 
hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/1a44fb2ffbd9473aa284d07ee8d459ac 2023-05-31 10:57:50,695 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/1a44fb2ffbd9473aa284d07ee8d459ac, entries=10, sequenceid=213, filesize=15.3 K 2023-05-31 10:57:50,696 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~10.51 KB/10760, heapSize ~11.48 KB/11760, currentSize=17.86 KB/18292 for 72200ad8310565f077bdbf7870786701 in 24ms, sequenceid=213, compaction requested=true 2023-05-31 10:57:50,696 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 72200ad8310565f077bdbf7870786701: 2023-05-31 10:57:50,696 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 10:57:50,696 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-31 10:57:50,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36333] regionserver.HRegion(9158): Flush requested on 72200ad8310565f077bdbf7870786701 2023-05-31 10:57:50,697 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 72200ad8310565f077bdbf7870786701 1/1 column families, dataSize=18.91 KB heapSize=20.50 KB 2023-05-31 10:57:50,697 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 123035 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-31 10:57:50,697 DEBUG 
[RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HStore(1912): 72200ad8310565f077bdbf7870786701/info is initiating minor compaction (all files) 2023-05-31 10:57:50,697 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 72200ad8310565f077bdbf7870786701/info in TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701. 2023-05-31 10:57:50,698 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/12e017203ec74dd991a10395571fee58, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/01664acbc93d43dcaf5bf3f9e2d70c02, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/1a44fb2ffbd9473aa284d07ee8d459ac] into tmpdir=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp, totalSize=120.2 K 2023-05-31 10:57:50,698 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.Compactor(207): Compacting 12e017203ec74dd991a10395571fee58, keycount=73, bloomtype=ROW, size=82.2 K, encoding=NONE, compression=NONE, seqNum=179, earliestPutTs=1685530632356 2023-05-31 10:57:50,698 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.Compactor(207): Compacting 01664acbc93d43dcaf5bf3f9e2d70c02, keycount=17, bloomtype=ROW, size=22.7 K, encoding=NONE, compression=NONE, seqNum=199, earliestPutTs=1685530668628 2023-05-31 10:57:50,699 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] 
compactions.Compactor(207): Compacting 1a44fb2ffbd9473aa284d07ee8d459ac, keycount=10, bloomtype=ROW, size=15.3 K, encoding=NONE, compression=NONE, seqNum=213, earliestPutTs=1685530668655 2023-05-31 10:57:50,705 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=18.91 KB at sequenceid=234 (bloomFilter=true), to=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/dee166bcde3749eb90c2cc79d666af2d 2023-05-31 10:57:50,710 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] throttle.PressureAwareThroughputController(145): 72200ad8310565f077bdbf7870786701#info#compaction#49 average throughput is 51.31 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-31 10:57:50,712 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/dee166bcde3749eb90c2cc79d666af2d as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/dee166bcde3749eb90c2cc79d666af2d 2023-05-31 10:57:50,712 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36333] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=72200ad8310565f077bdbf7870786701, server=jenkins-hbase20.apache.org,36333,1685530619184 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-31 10:57:50,713 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36333] ipc.CallRunner(144): callId: 207 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:36176 deadline: 1685530680712, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=72200ad8310565f077bdbf7870786701, server=jenkins-hbase20.apache.org,36333,1685530619184 2023-05-31 10:57:50,719 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/dee166bcde3749eb90c2cc79d666af2d, entries=18, sequenceid=234, filesize=23.7 K 2023-05-31 10:57:50,720 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~18.91 KB/19368, heapSize ~20.48 KB/20976, currentSize=11.56 KB/11836 for 72200ad8310565f077bdbf7870786701 in 23ms, sequenceid=234, compaction requested=false 2023-05-31 10:57:50,720 DEBUG [MemStoreFlusher.0] 
regionserver.HRegion(2446): Flush status journal for 72200ad8310565f077bdbf7870786701: 2023-05-31 10:57:50,721 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/e2a9b8664368496d8667ac247b042c70 as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/e2a9b8664368496d8667ac247b042c70 2023-05-31 10:57:50,727 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 72200ad8310565f077bdbf7870786701/info of 72200ad8310565f077bdbf7870786701 into e2a9b8664368496d8667ac247b042c70(size=110.7 K), total size for store is 134.5 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-31 10:57:50,727 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 72200ad8310565f077bdbf7870786701: 2023-05-31 10:57:50,727 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701., storeName=72200ad8310565f077bdbf7870786701/info, priority=13, startTime=1685530670696; duration=0sec 2023-05-31 10:57:50,727 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 10:57:52,318 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-31 10:58:00,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36333] regionserver.HRegion(9158): Flush requested on 72200ad8310565f077bdbf7870786701 2023-05-31 10:58:00,783 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 72200ad8310565f077bdbf7870786701 1/1 column families, dataSize=12.61 KB heapSize=13.75 KB 2023-05-31 10:58:00,799 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=12.61 KB at sequenceid=250 (bloomFilter=true), to=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/24a6040140134cb49bf048c960050938 2023-05-31 10:58:00,806 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/24a6040140134cb49bf048c960050938 as 
hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/24a6040140134cb49bf048c960050938 2023-05-31 10:58:00,811 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/24a6040140134cb49bf048c960050938, entries=12, sequenceid=250, filesize=17.4 K 2023-05-31 10:58:00,811 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~12.61 KB/12912, heapSize ~13.73 KB/14064, currentSize=1.05 KB/1076 for 72200ad8310565f077bdbf7870786701 in 28ms, sequenceid=250, compaction requested=true 2023-05-31 10:58:00,812 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 72200ad8310565f077bdbf7870786701: 2023-05-31 10:58:00,812 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-05-31 10:58:00,812 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-31 10:58:00,813 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 155485 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-31 10:58:00,813 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HStore(1912): 72200ad8310565f077bdbf7870786701/info is initiating minor compaction (all files) 2023-05-31 10:58:00,813 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 72200ad8310565f077bdbf7870786701/info in 
TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701. 2023-05-31 10:58:00,813 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/e2a9b8664368496d8667ac247b042c70, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/dee166bcde3749eb90c2cc79d666af2d, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/24a6040140134cb49bf048c960050938] into tmpdir=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp, totalSize=151.8 K 2023-05-31 10:58:00,813 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.Compactor(207): Compacting e2a9b8664368496d8667ac247b042c70, keycount=100, bloomtype=ROW, size=110.7 K, encoding=NONE, compression=NONE, seqNum=213, earliestPutTs=1685530632356 2023-05-31 10:58:00,814 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.Compactor(207): Compacting dee166bcde3749eb90c2cc79d666af2d, keycount=18, bloomtype=ROW, size=23.7 K, encoding=NONE, compression=NONE, seqNum=234, earliestPutTs=1685530670674 2023-05-31 10:58:00,814 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.Compactor(207): Compacting 24a6040140134cb49bf048c960050938, keycount=12, bloomtype=ROW, size=17.4 K, encoding=NONE, compression=NONE, seqNum=250, earliestPutTs=1685530670698 2023-05-31 10:58:00,823 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] throttle.PressureAwareThroughputController(145): 
72200ad8310565f077bdbf7870786701#info#compaction#51 average throughput is 66.70 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-31 10:58:00,831 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/a3bb6f2275b74d0081219ea86937c2ff as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/a3bb6f2275b74d0081219ea86937c2ff 2023-05-31 10:58:00,837 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 72200ad8310565f077bdbf7870786701/info of 72200ad8310565f077bdbf7870786701 into a3bb6f2275b74d0081219ea86937c2ff(size=142.6 K), total size for store is 142.6 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-31 10:58:00,837 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 72200ad8310565f077bdbf7870786701: 2023-05-31 10:58:00,837 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701., storeName=72200ad8310565f077bdbf7870786701/info, priority=13, startTime=1685530680812; duration=0sec 2023-05-31 10:58:00,837 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 10:58:02,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36333] regionserver.HRegion(9158): Flush requested on 72200ad8310565f077bdbf7870786701 2023-05-31 10:58:02,807 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 72200ad8310565f077bdbf7870786701 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-31 10:58:02,820 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=261 (bloomFilter=true), to=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/f62e773173da4fb18c7264a40c642594 2023-05-31 10:58:02,826 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/f62e773173da4fb18c7264a40c642594 as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/f62e773173da4fb18c7264a40c642594 2023-05-31 10:58:02,831 INFO 
[MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/f62e773173da4fb18c7264a40c642594, entries=7, sequenceid=261, filesize=12.1 K 2023-05-31 10:58:02,831 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=16.81 KB/17216 for 72200ad8310565f077bdbf7870786701 in 24ms, sequenceid=261, compaction requested=false 2023-05-31 10:58:02,831 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 72200ad8310565f077bdbf7870786701: 2023-05-31 10:58:02,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36333] regionserver.HRegion(9158): Flush requested on 72200ad8310565f077bdbf7870786701 2023-05-31 10:58:02,832 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 72200ad8310565f077bdbf7870786701 1/1 column families, dataSize=17.86 KB heapSize=19.38 KB 2023-05-31 10:58:02,848 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=17.86 KB at sequenceid=281 (bloomFilter=true), to=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/59b3a01435b148f7b97e2a8579a19e14 2023-05-31 10:58:02,853 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/59b3a01435b148f7b97e2a8579a19e14 as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/59b3a01435b148f7b97e2a8579a19e14 2023-05-31 10:58:02,857 INFO [MemStoreFlusher.0] 
regionserver.HStore(1080): Added hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/59b3a01435b148f7b97e2a8579a19e14, entries=17, sequenceid=281, filesize=22.7 K 2023-05-31 10:58:02,858 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~17.86 KB/18292, heapSize ~19.36 KB/19824, currentSize=9.46 KB/9684 for 72200ad8310565f077bdbf7870786701 in 26ms, sequenceid=281, compaction requested=true 2023-05-31 10:58:02,858 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 72200ad8310565f077bdbf7870786701: 2023-05-31 10:58:02,858 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 10:58:02,858 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-31 10:58:02,859 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 181686 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-31 10:58:02,859 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HStore(1912): 72200ad8310565f077bdbf7870786701/info is initiating minor compaction (all files) 2023-05-31 10:58:02,859 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 72200ad8310565f077bdbf7870786701/info in TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701. 
2023-05-31 10:58:02,859 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/a3bb6f2275b74d0081219ea86937c2ff, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/f62e773173da4fb18c7264a40c642594, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/59b3a01435b148f7b97e2a8579a19e14] into tmpdir=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp, totalSize=177.4 K 2023-05-31 10:58:02,860 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.Compactor(207): Compacting a3bb6f2275b74d0081219ea86937c2ff, keycount=130, bloomtype=ROW, size=142.6 K, encoding=NONE, compression=NONE, seqNum=250, earliestPutTs=1685530632356 2023-05-31 10:58:02,860 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.Compactor(207): Compacting f62e773173da4fb18c7264a40c642594, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=261, earliestPutTs=1685530680784 2023-05-31 10:58:02,860 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.Compactor(207): Compacting 59b3a01435b148f7b97e2a8579a19e14, keycount=17, bloomtype=ROW, size=22.7 K, encoding=NONE, compression=NONE, seqNum=281, earliestPutTs=1685530682808 2023-05-31 10:58:02,873 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] throttle.PressureAwareThroughputController(145): 72200ad8310565f077bdbf7870786701#info#compaction#54 average throughput is 52.68 MB/second, slept 0 time(s) and 
total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-31 10:58:02,886 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/6ab507c7c77c48629debb933357008b5 as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/6ab507c7c77c48629debb933357008b5 2023-05-31 10:58:02,892 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 72200ad8310565f077bdbf7870786701/info of 72200ad8310565f077bdbf7870786701 into 6ab507c7c77c48629debb933357008b5(size=168.0 K), total size for store is 168.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-31 10:58:02,892 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 72200ad8310565f077bdbf7870786701: 2023-05-31 10:58:02,892 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701., storeName=72200ad8310565f077bdbf7870786701/info, priority=13, startTime=1685530682858; duration=0sec 2023-05-31 10:58:02,892 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 10:58:04,847 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36333] regionserver.HRegion(9158): Flush requested on 72200ad8310565f077bdbf7870786701 2023-05-31 10:58:04,847 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 72200ad8310565f077bdbf7870786701 1/1 column families, dataSize=10.51 KB heapSize=11.50 KB 2023-05-31 10:58:04,862 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.51 KB at sequenceid=295 (bloomFilter=true), to=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/95e9e3a7940446059e6c31a8bb84ce2f 2023-05-31 10:58:04,867 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/95e9e3a7940446059e6c31a8bb84ce2f as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/95e9e3a7940446059e6c31a8bb84ce2f 2023-05-31 10:58:04,873 INFO 
[MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/95e9e3a7940446059e6c31a8bb84ce2f, entries=10, sequenceid=295, filesize=15.3 K 2023-05-31 10:58:04,873 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~10.51 KB/10760, heapSize ~11.48 KB/11760, currentSize=17.86 KB/18292 for 72200ad8310565f077bdbf7870786701 in 26ms, sequenceid=295, compaction requested=false 2023-05-31 10:58:04,873 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 72200ad8310565f077bdbf7870786701: 2023-05-31 10:58:04,874 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36333] regionserver.HRegion(9158): Flush requested on 72200ad8310565f077bdbf7870786701 2023-05-31 10:58:04,874 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 72200ad8310565f077bdbf7870786701 1/1 column families, dataSize=18.91 KB heapSize=20.50 KB 2023-05-31 10:58:04,886 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36333] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=72200ad8310565f077bdbf7870786701, server=jenkins-hbase20.apache.org,36333,1685530619184 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-31 10:58:04,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36333] ipc.CallRunner(144): callId: 273 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:36176 deadline: 1685530694886, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=72200ad8310565f077bdbf7870786701, server=jenkins-hbase20.apache.org,36333,1685530619184 2023-05-31 10:58:04,887 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=18.91 KB at sequenceid=316 (bloomFilter=true), to=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/253056d374074b888b1fc17d17a37f8f 2023-05-31 10:58:04,891 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/253056d374074b888b1fc17d17a37f8f as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/253056d374074b888b1fc17d17a37f8f 2023-05-31 10:58:04,896 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/253056d374074b888b1fc17d17a37f8f, entries=18, sequenceid=316, filesize=23.7 K 2023-05-31 10:58:04,897 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~18.91 KB/19368, heapSize ~20.48 KB/20976, currentSize=11.56 KB/11836 for 72200ad8310565f077bdbf7870786701 in 23ms, sequenceid=316, compaction requested=true 2023-05-31 10:58:04,897 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 72200ad8310565f077bdbf7870786701: 2023-05-31 10:58:04,897 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-05-31 10:58:04,897 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-31 10:58:04,898 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 212008 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-31 10:58:04,898 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HStore(1912): 72200ad8310565f077bdbf7870786701/info is initiating minor 
compaction (all files)
2023-05-31 10:58:04,898 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 72200ad8310565f077bdbf7870786701/info in TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.
2023-05-31 10:58:04,898 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/6ab507c7c77c48629debb933357008b5, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/95e9e3a7940446059e6c31a8bb84ce2f, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/253056d374074b888b1fc17d17a37f8f] into tmpdir=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp, totalSize=207.0 K
2023-05-31 10:58:04,898 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.Compactor(207): Compacting 6ab507c7c77c48629debb933357008b5, keycount=154, bloomtype=ROW, size=168.0 K, encoding=NONE, compression=NONE, seqNum=281, earliestPutTs=1685530632356
2023-05-31 10:58:04,899 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.Compactor(207): Compacting 95e9e3a7940446059e6c31a8bb84ce2f, keycount=10, bloomtype=ROW, size=15.3 K, encoding=NONE, compression=NONE, seqNum=295, earliestPutTs=1685530682832
2023-05-31 10:58:04,899 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] compactions.Compactor(207): Compacting 253056d374074b888b1fc17d17a37f8f, keycount=18, bloomtype=ROW, size=23.7 K, encoding=NONE, compression=NONE, seqNum=316, earliestPutTs=1685530684849
2023-05-31 10:58:04,908 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] throttle.PressureAwareThroughputController(145): 72200ad8310565f077bdbf7870786701#info#compaction#57 average throughput is 93.38 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-05-31 10:58:04,918 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/0885e1784144431a8b5fa5f1003c4fb2 as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/0885e1784144431a8b5fa5f1003c4fb2
2023-05-31 10:58:04,924 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 72200ad8310565f077bdbf7870786701/info of 72200ad8310565f077bdbf7870786701 into 0885e1784144431a8b5fa5f1003c4fb2(size=197.6 K), total size for store is 197.6 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-05-31 10:58:04,924 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 72200ad8310565f077bdbf7870786701:
2023-05-31 10:58:04,924 INFO [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701., storeName=72200ad8310565f077bdbf7870786701/info, priority=13, startTime=1685530684897; duration=0sec
2023-05-31 10:58:04,924 DEBUG [RS:0;jenkins-hbase20:36333-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-05-31 10:58:14,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36333] regionserver.HRegion(9158): Flush requested on 72200ad8310565f077bdbf7870786701
2023-05-31 10:58:14,940 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 72200ad8310565f077bdbf7870786701 1/1 column families, dataSize=12.61 KB heapSize=13.75 KB
2023-05-31 10:58:14,956 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=12.61 KB at sequenceid=332 (bloomFilter=true), to=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/ab972878d70b474abf8ce2351694da47
2023-05-31 10:58:14,962 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/ab972878d70b474abf8ce2351694da47 as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/ab972878d70b474abf8ce2351694da47
2023-05-31 10:58:14,968 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/ab972878d70b474abf8ce2351694da47, entries=12, sequenceid=332, filesize=17.4 K
2023-05-31 10:58:14,969 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~12.61 KB/12912, heapSize ~13.73 KB/14064, currentSize=1.05 KB/1076 for 72200ad8310565f077bdbf7870786701 in 29ms, sequenceid=332, compaction requested=false
2023-05-31 10:58:14,969 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 72200ad8310565f077bdbf7870786701:
2023-05-31 10:58:16,943 INFO [Listener at localhost.localdomain/34183] wal.AbstractTestLogRolling(188): after writing there are 0 log files
2023-05-31 10:58:16,972 INFO [Listener at localhost.localdomain/34183] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/WALs/jenkins-hbase20.apache.org,36333,1685530619184/jenkins-hbase20.apache.org%2C36333%2C1685530619184.1685530619582 with entries=316, filesize=309.16 KB; new WAL /user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/WALs/jenkins-hbase20.apache.org,36333,1685530619184/jenkins-hbase20.apache.org%2C36333%2C1685530619184.1685530696943
2023-05-31 10:58:16,972 DEBUG [Listener at localhost.localdomain/34183] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34255,DS-f7fb2d32-c24c-4f53-b580-9212bddbd2ff,DISK], DatanodeInfoWithStorage[127.0.0.1:33157,DS-f3e6c67a-1d34-4350-87f8-531f2e4446c0,DISK]]
2023-05-31 10:58:16,972 DEBUG [Listener at localhost.localdomain/34183] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/WALs/jenkins-hbase20.apache.org,36333,1685530619184/jenkins-hbase20.apache.org%2C36333%2C1685530619184.1685530619582 is not closed yet, will try archiving it next time
2023-05-31 10:58:16,981 INFO [Listener at localhost.localdomain/34183] regionserver.HRegion(2745): Flushing ac565024d7501960057caf2cf4ed562d 1/1 column families, dataSize=78 B heapSize=488 B
2023-05-31 10:58:16,992 INFO [Listener at localhost.localdomain/34183] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/namespace/ac565024d7501960057caf2cf4ed562d/.tmp/info/00b1f540517e47ecad1c8c2b659bfa61
2023-05-31 10:58:16,997 DEBUG [Listener at localhost.localdomain/34183] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/namespace/ac565024d7501960057caf2cf4ed562d/.tmp/info/00b1f540517e47ecad1c8c2b659bfa61 as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/namespace/ac565024d7501960057caf2cf4ed562d/info/00b1f540517e47ecad1c8c2b659bfa61
2023-05-31 10:58:17,004 INFO [Listener at localhost.localdomain/34183] regionserver.HStore(1080): Added hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/namespace/ac565024d7501960057caf2cf4ed562d/info/00b1f540517e47ecad1c8c2b659bfa61, entries=2, sequenceid=6, filesize=4.8 K
2023-05-31 10:58:17,005 INFO [Listener at localhost.localdomain/34183] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for ac565024d7501960057caf2cf4ed562d in 24ms, sequenceid=6, compaction requested=false
2023-05-31 10:58:17,006 DEBUG [Listener at localhost.localdomain/34183] regionserver.HRegion(2446): Flush status journal for ac565024d7501960057caf2cf4ed562d:
2023-05-31 10:58:17,006 DEBUG [Listener at localhost.localdomain/34183] regionserver.HRegion(2446): Flush status journal for 5c552d47aa2e590e40612dc2b820a3f7:
2023-05-31 10:58:17,006 INFO [Listener at localhost.localdomain/34183] regionserver.HRegion(2745): Flushing 72200ad8310565f077bdbf7870786701 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB
2023-05-31 10:58:17,016 INFO [Listener at localhost.localdomain/34183] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=336 (bloomFilter=true), to=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/31287371009c40d6ba69b3b645bcdbac
2023-05-31 10:58:17,025 DEBUG [Listener at localhost.localdomain/34183] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/.tmp/info/31287371009c40d6ba69b3b645bcdbac as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/31287371009c40d6ba69b3b645bcdbac
2023-05-31 10:58:17,030 INFO [Listener at localhost.localdomain/34183] regionserver.HStore(1080): Added hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/31287371009c40d6ba69b3b645bcdbac, entries=1, sequenceid=336, filesize=5.8 K
2023-05-31 10:58:17,030 INFO [Listener at localhost.localdomain/34183] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 72200ad8310565f077bdbf7870786701 in 24ms, sequenceid=336, compaction requested=true
2023-05-31 10:58:17,031 DEBUG [Listener at localhost.localdomain/34183] regionserver.HRegion(2446): Flush status journal for 72200ad8310565f077bdbf7870786701:
2023-05-31 10:58:17,031 INFO [Listener at localhost.localdomain/34183] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.26 KB heapSize=4.19 KB
2023-05-31 10:58:17,037 INFO [Listener at localhost.localdomain/34183] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.26 KB at sequenceid=24 (bloomFilter=false), to=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/meta/1588230740/.tmp/info/7d771bf45e06437da553e4450c7ec2e5
2023-05-31 10:58:17,042 DEBUG [Listener at localhost.localdomain/34183] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/meta/1588230740/.tmp/info/7d771bf45e06437da553e4450c7ec2e5 as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/meta/1588230740/info/7d771bf45e06437da553e4450c7ec2e5
2023-05-31 10:58:17,046 INFO [Listener at localhost.localdomain/34183] regionserver.HStore(1080): Added hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/meta/1588230740/info/7d771bf45e06437da553e4450c7ec2e5, entries=16, sequenceid=24, filesize=7.0 K
2023-05-31 10:58:17,047 INFO [Listener at localhost.localdomain/34183] regionserver.HRegion(2948): Finished flush of dataSize ~2.26 KB/2316, heapSize ~3.67 KB/3760, currentSize=0 B/0 for 1588230740 in 16ms, sequenceid=24, compaction requested=false
2023-05-31 10:58:17,047 DEBUG [Listener at localhost.localdomain/34183] regionserver.HRegion(2446): Flush status journal for 1588230740:
2023-05-31 10:58:17,055 INFO [Listener at localhost.localdomain/34183] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/WALs/jenkins-hbase20.apache.org,36333,1685530619184/jenkins-hbase20.apache.org%2C36333%2C1685530619184.1685530696943 with entries=4, filesize=1.22 KB; new WAL /user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/WALs/jenkins-hbase20.apache.org,36333,1685530619184/jenkins-hbase20.apache.org%2C36333%2C1685530619184.1685530697048
2023-05-31 10:58:17,056 DEBUG [Listener at localhost.localdomain/34183] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34255,DS-f7fb2d32-c24c-4f53-b580-9212bddbd2ff,DISK], DatanodeInfoWithStorage[127.0.0.1:33157,DS-f3e6c67a-1d34-4350-87f8-531f2e4446c0,DISK]]
2023-05-31 10:58:17,056 DEBUG [Listener at localhost.localdomain/34183] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/WALs/jenkins-hbase20.apache.org,36333,1685530619184/jenkins-hbase20.apache.org%2C36333%2C1685530619184.1685530696943 is not closed yet, will try archiving it next time
2023-05-31 10:58:17,056 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/WALs/jenkins-hbase20.apache.org,36333,1685530619184/jenkins-hbase20.apache.org%2C36333%2C1685530619184.1685530619582 to hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/oldWALs/jenkins-hbase20.apache.org%2C36333%2C1685530619184.1685530619582
2023-05-31 10:58:17,057 INFO [Listener at localhost.localdomain/34183] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1])
2023-05-31 10:58:17,059 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/WALs/jenkins-hbase20.apache.org,36333,1685530619184/jenkins-hbase20.apache.org%2C36333%2C1685530619184.1685530696943 to hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/oldWALs/jenkins-hbase20.apache.org%2C36333%2C1685530619184.1685530696943
2023-05-31 10:58:17,157 INFO [Listener at localhost.localdomain/34183] hbase.HBaseTestingUtility(1286): Shutting down minicluster
2023-05-31 10:58:17,158 INFO [Listener at localhost.localdomain/34183] client.ConnectionImplementation(1974): Closing master protocol: MasterService
2023-05-31 10:58:17,158 DEBUG [Listener at localhost.localdomain/34183] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5606cb04 to 127.0.0.1:57094
2023-05-31 10:58:17,158 DEBUG [Listener at localhost.localdomain/34183] ipc.AbstractRpcClient(494): Stopping rpc client
2023-05-31 10:58:17,158 DEBUG [Listener at localhost.localdomain/34183] util.JVMClusterUtil(237): Shutting down HBase Cluster
2023-05-31 10:58:17,158 DEBUG [Listener at localhost.localdomain/34183] util.JVMClusterUtil(257): Found active master hash=1571954614, stopped=false
2023-05-31 10:58:17,158 INFO [Listener at localhost.localdomain/34183] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,37771,1685530619142
2023-05-31 10:58:17,160 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-05-31 10:58:17,160 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): regionserver:36333-0x101a12a451b0001, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-05-31 10:58:17,160 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 10:58:17,160 INFO [Listener at localhost.localdomain/34183] procedure2.ProcedureExecutor(629): Stopping
2023-05-31 10:58:17,160 DEBUG [Listener at localhost.localdomain/34183] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5e3c90bd to 127.0.0.1:57094
2023-05-31 10:58:17,161 DEBUG [Listener at localhost.localdomain/34183] ipc.AbstractRpcClient(494): Stopping rpc client
2023-05-31 10:58:17,162 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36333-0x101a12a451b0001, quorum=127.0.0.1:57094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-05-31 10:58:17,162 INFO [Listener at localhost.localdomain/34183] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,36333,1685530619184' *****
2023-05-31 10:58:17,162 INFO [Listener at localhost.localdomain/34183] regionserver.HRegionServer(2309): STOPPED: Shutdown requested
2023-05-31 10:58:17,161 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-05-31 10:58:17,162 INFO [RS:0;jenkins-hbase20:36333] regionserver.HeapMemoryManager(220): Stopping
2023-05-31 10:58:17,162 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting
2023-05-31 10:58:17,162 INFO [RS:0;jenkins-hbase20:36333] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully.
2023-05-31 10:58:17,162 INFO [RS:0;jenkins-hbase20:36333] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully.
2023-05-31 10:58:17,163 INFO [RS:0;jenkins-hbase20:36333] regionserver.HRegionServer(3303): Received CLOSE for ac565024d7501960057caf2cf4ed562d
2023-05-31 10:58:17,163 INFO [RS:0;jenkins-hbase20:36333] regionserver.HRegionServer(3303): Received CLOSE for 5c552d47aa2e590e40612dc2b820a3f7
2023-05-31 10:58:17,163 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing ac565024d7501960057caf2cf4ed562d, disabling compactions & flushes
2023-05-31 10:58:17,163 INFO [RS:0;jenkins-hbase20:36333] regionserver.HRegionServer(3303): Received CLOSE for 72200ad8310565f077bdbf7870786701
2023-05-31 10:58:17,163 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685530619722.ac565024d7501960057caf2cf4ed562d.
2023-05-31 10:58:17,163 INFO [RS:0;jenkins-hbase20:36333] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,36333,1685530619184
2023-05-31 10:58:17,163 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685530619722.ac565024d7501960057caf2cf4ed562d.
2023-05-31 10:58:17,163 DEBUG [RS:0;jenkins-hbase20:36333] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x639c4780 to 127.0.0.1:57094
2023-05-31 10:58:17,163 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685530619722.ac565024d7501960057caf2cf4ed562d. after waiting 0 ms
2023-05-31 10:58:17,163 DEBUG [RS:0;jenkins-hbase20:36333] ipc.AbstractRpcClient(494): Stopping rpc client
2023-05-31 10:58:17,163 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685530619722.ac565024d7501960057caf2cf4ed562d.
2023-05-31 10:58:17,164 INFO [RS:0;jenkins-hbase20:36333] regionserver.CompactSplit(434): Waiting for Split Thread to finish...
2023-05-31 10:58:17,164 INFO [RS:0;jenkins-hbase20:36333] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish...
2023-05-31 10:58:17,164 INFO [RS:0;jenkins-hbase20:36333] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish...
2023-05-31 10:58:17,164 INFO [RS:0;jenkins-hbase20:36333] regionserver.HRegionServer(3303): Received CLOSE for 1588230740
2023-05-31 10:58:17,165 INFO [RS:0;jenkins-hbase20:36333] regionserver.HRegionServer(1474): Waiting on 4 regions to close
2023-05-31 10:58:17,166 DEBUG [RS:0;jenkins-hbase20:36333] regionserver.HRegionServer(1478): Online Regions={ac565024d7501960057caf2cf4ed562d=hbase:namespace,,1685530619722.ac565024d7501960057caf2cf4ed562d., 5c552d47aa2e590e40612dc2b820a3f7=TestLogRolling-testLogRolling,,1685530642524.5c552d47aa2e590e40612dc2b820a3f7., 72200ad8310565f077bdbf7870786701=TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701., 1588230740=hbase:meta,,1.1588230740}
2023-05-31 10:58:17,166 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes
2023-05-31 10:58:17,166 DEBUG [RS:0;jenkins-hbase20:36333] regionserver.HRegionServer(1504): Waiting on 1588230740, 5c552d47aa2e590e40612dc2b820a3f7, 72200ad8310565f077bdbf7870786701, ac565024d7501960057caf2cf4ed562d
2023-05-31 10:58:17,166 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740
2023-05-31 10:58:17,168 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740
2023-05-31 10:58:17,168 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms
2023-05-31 10:58:17,168 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740
2023-05-31 10:58:17,176 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/namespace/ac565024d7501960057caf2cf4ed562d/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1
2023-05-31 10:58:17,177 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685530619722.ac565024d7501960057caf2cf4ed562d.
2023-05-31 10:58:17,177 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for ac565024d7501960057caf2cf4ed562d:
2023-05-31 10:58:17,177 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685530619722.ac565024d7501960057caf2cf4ed562d.
2023-05-31 10:58:17,178 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 5c552d47aa2e590e40612dc2b820a3f7, disabling compactions & flushes
2023-05-31 10:58:17,178 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/hbase/meta/1588230740/recovered.edits/27.seqid, newMaxSeqId=27, maxSeqId=1
2023-05-31 10:58:17,178 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1685530642524.5c552d47aa2e590e40612dc2b820a3f7.
2023-05-31 10:58:17,178 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1685530642524.5c552d47aa2e590e40612dc2b820a3f7.
2023-05-31 10:58:17,178 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1685530642524.5c552d47aa2e590e40612dc2b820a3f7. after waiting 0 ms
2023-05-31 10:58:17,178 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1685530642524.5c552d47aa2e590e40612dc2b820a3f7.
2023-05-31 10:58:17,179 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685530642524.5c552d47aa2e590e40612dc2b820a3f7.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/5c552d47aa2e590e40612dc2b820a3f7/info/5fd0d091606a40ffba8281dd75b2448d.2eb3b6d0803f7bc80b97fbab5624c07d->hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/5fd0d091606a40ffba8281dd75b2448d-bottom] to archive
2023-05-31 10:58:17,180 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2023-05-31 10:58:17,180 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740
2023-05-31 10:58:17,181 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-05-31 10:58:17,181 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740
2023-05-31 10:58:17,181 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685530642524.5c552d47aa2e590e40612dc2b820a3f7.-1] backup.HFileArchiver(360): Archiving compacted files.
2023-05-31 10:58:17,183 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685530642524.5c552d47aa2e590e40612dc2b820a3f7.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/5c552d47aa2e590e40612dc2b820a3f7/info/5fd0d091606a40ffba8281dd75b2448d.2eb3b6d0803f7bc80b97fbab5624c07d to hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/archive/data/default/TestLogRolling-testLogRolling/5c552d47aa2e590e40612dc2b820a3f7/info/5fd0d091606a40ffba8281dd75b2448d.2eb3b6d0803f7bc80b97fbab5624c07d
2023-05-31 10:58:17,189 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/5c552d47aa2e590e40612dc2b820a3f7/recovered.edits/93.seqid, newMaxSeqId=93, maxSeqId=88
2023-05-31 10:58:17,191 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1685530642524.5c552d47aa2e590e40612dc2b820a3f7.
2023-05-31 10:58:17,191 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 5c552d47aa2e590e40612dc2b820a3f7:
2023-05-31 10:58:17,191 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,,1685530642524.5c552d47aa2e590e40612dc2b820a3f7.
2023-05-31 10:58:17,191 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 72200ad8310565f077bdbf7870786701, disabling compactions & flushes
2023-05-31 10:58:17,191 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.
2023-05-31 10:58:17,191 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.
2023-05-31 10:58:17,191 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701. after waiting 0 ms
2023-05-31 10:58:17,191 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.
2023-05-31 10:58:17,206 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/5fd0d091606a40ffba8281dd75b2448d.2eb3b6d0803f7bc80b97fbab5624c07d->hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/2eb3b6d0803f7bc80b97fbab5624c07d/info/5fd0d091606a40ffba8281dd75b2448d-top, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/ff5ae3c6c7e14120a6e5dae6a3cb60a1, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/TestLogRolling-testLogRolling=2eb3b6d0803f7bc80b97fbab5624c07d-01c2bf575b9b4b30941e43455a755bfb, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/e1bc496a2f6e4f9c89acf837e0874ffa, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/c94a0be2b1974d58b64444e1927be36b, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/78ec41b0faa74e398c42caea8820b464, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/0e496297cc0846e98c1771ccea813de4, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/eb3cb20b17814e0c91eaf7d96a44836a, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/a8e57c70b1644aef800776ec120c38a5, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/2397082b2f8c46c8901ada00335f669e, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/12e017203ec74dd991a10395571fee58, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/d4ba5f0cc1d54b6994145e6d9c31918a, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/01664acbc93d43dcaf5bf3f9e2d70c02, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/e2a9b8664368496d8667ac247b042c70, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/1a44fb2ffbd9473aa284d07ee8d459ac, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/dee166bcde3749eb90c2cc79d666af2d, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/a3bb6f2275b74d0081219ea86937c2ff, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/24a6040140134cb49bf048c960050938, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/f62e773173da4fb18c7264a40c642594, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/6ab507c7c77c48629debb933357008b5, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/59b3a01435b148f7b97e2a8579a19e14, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/95e9e3a7940446059e6c31a8bb84ce2f, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/253056d374074b888b1fc17d17a37f8f] to archive
2023-05-31 10:58:17,207 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.-1] backup.HFileArchiver(360): Archiving compacted files.
2023-05-31 10:58:17,208 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/5fd0d091606a40ffba8281dd75b2448d.2eb3b6d0803f7bc80b97fbab5624c07d to hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/archive/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/5fd0d091606a40ffba8281dd75b2448d.2eb3b6d0803f7bc80b97fbab5624c07d
2023-05-31 10:58:17,209 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/ff5ae3c6c7e14120a6e5dae6a3cb60a1 to hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/archive/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/ff5ae3c6c7e14120a6e5dae6a3cb60a1
2023-05-31 10:58:17,210 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/TestLogRolling-testLogRolling=2eb3b6d0803f7bc80b97fbab5624c07d-01c2bf575b9b4b30941e43455a755bfb to hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/archive/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/TestLogRolling-testLogRolling=2eb3b6d0803f7bc80b97fbab5624c07d-01c2bf575b9b4b30941e43455a755bfb
2023-05-31 10:58:17,211 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/e1bc496a2f6e4f9c89acf837e0874ffa to hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/archive/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/e1bc496a2f6e4f9c89acf837e0874ffa
2023-05-31 10:58:17,212 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/c94a0be2b1974d58b64444e1927be36b to hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/archive/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/c94a0be2b1974d58b64444e1927be36b
2023-05-31 10:58:17,213 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/78ec41b0faa74e398c42caea8820b464 to hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/archive/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/78ec41b0faa74e398c42caea8820b464
2023-05-31 10:58:17,214 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/0e496297cc0846e98c1771ccea813de4 to hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/archive/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/0e496297cc0846e98c1771ccea813de4
2023-05-31 10:58:17,216 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/eb3cb20b17814e0c91eaf7d96a44836a to hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/archive/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/eb3cb20b17814e0c91eaf7d96a44836a
2023-05-31 10:58:17,217 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.-1] backup.HFileArchiver(582):
Archived from FileableStoreFile, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/a8e57c70b1644aef800776ec120c38a5 to hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/archive/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/a8e57c70b1644aef800776ec120c38a5 2023-05-31 10:58:17,218 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/2397082b2f8c46c8901ada00335f669e to hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/archive/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/2397082b2f8c46c8901ada00335f669e 2023-05-31 10:58:17,219 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/12e017203ec74dd991a10395571fee58 to hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/archive/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/12e017203ec74dd991a10395571fee58 2023-05-31 10:58:17,220 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, 
hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/d4ba5f0cc1d54b6994145e6d9c31918a to hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/archive/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/d4ba5f0cc1d54b6994145e6d9c31918a 2023-05-31 10:58:17,221 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/01664acbc93d43dcaf5bf3f9e2d70c02 to hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/archive/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/01664acbc93d43dcaf5bf3f9e2d70c02 2023-05-31 10:58:17,222 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/e2a9b8664368496d8667ac247b042c70 to hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/archive/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/e2a9b8664368496d8667ac247b042c70 2023-05-31 10:58:17,223 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, 
hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/1a44fb2ffbd9473aa284d07ee8d459ac to hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/archive/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/1a44fb2ffbd9473aa284d07ee8d459ac 2023-05-31 10:58:17,225 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/dee166bcde3749eb90c2cc79d666af2d to hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/archive/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/dee166bcde3749eb90c2cc79d666af2d 2023-05-31 10:58:17,226 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/a3bb6f2275b74d0081219ea86937c2ff to hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/archive/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/a3bb6f2275b74d0081219ea86937c2ff 2023-05-31 10:58:17,228 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, 
hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/24a6040140134cb49bf048c960050938 to hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/archive/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/24a6040140134cb49bf048c960050938 2023-05-31 10:58:17,229 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/f62e773173da4fb18c7264a40c642594 to hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/archive/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/f62e773173da4fb18c7264a40c642594 2023-05-31 10:58:17,230 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/6ab507c7c77c48629debb933357008b5 to hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/archive/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/6ab507c7c77c48629debb933357008b5 2023-05-31 10:58:17,232 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, 
hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/59b3a01435b148f7b97e2a8579a19e14 to hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/archive/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/59b3a01435b148f7b97e2a8579a19e14 2023-05-31 10:58:17,233 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/95e9e3a7940446059e6c31a8bb84ce2f to hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/archive/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/95e9e3a7940446059e6c31a8bb84ce2f 2023-05-31 10:58:17,234 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/253056d374074b888b1fc17d17a37f8f to hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/archive/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/info/253056d374074b888b1fc17d17a37f8f 2023-05-31 10:58:17,239 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/data/default/TestLogRolling-testLogRolling/72200ad8310565f077bdbf7870786701/recovered.edits/339.seqid, 
newMaxSeqId=339, maxSeqId=88 2023-05-31 10:58:17,240 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701. 2023-05-31 10:58:17,240 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 72200ad8310565f077bdbf7870786701: 2023-05-31 10:58:17,240 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,row0062,1685530642524.72200ad8310565f077bdbf7870786701. 2023-05-31 10:58:17,367 INFO [RS:0;jenkins-hbase20:36333] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,36333,1685530619184; all regions closed. 2023-05-31 10:58:17,368 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/WALs/jenkins-hbase20.apache.org,36333,1685530619184 2023-05-31 10:58:17,380 DEBUG [RS:0;jenkins-hbase20:36333] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/oldWALs 2023-05-31 10:58:17,380 INFO [RS:0;jenkins-hbase20:36333] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C36333%2C1685530619184.meta:.meta(num 1685530619673) 2023-05-31 10:58:17,380 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/WALs/jenkins-hbase20.apache.org,36333,1685530619184 2023-05-31 10:58:17,388 DEBUG [RS:0;jenkins-hbase20:36333] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/oldWALs 2023-05-31 10:58:17,388 INFO [RS:0;jenkins-hbase20:36333] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C36333%2C1685530619184:(num 1685530697048) 2023-05-31 10:58:17,388 DEBUG [RS:0;jenkins-hbase20:36333] ipc.AbstractRpcClient(494): Stopping rpc client 
2023-05-31 10:58:17,388 INFO [RS:0;jenkins-hbase20:36333] regionserver.LeaseManager(133): Closed leases 2023-05-31 10:58:17,389 INFO [RS:0;jenkins-hbase20:36333] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-05-31 10:58:17,389 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-31 10:58:17,389 INFO [RS:0;jenkins-hbase20:36333] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:36333 2023-05-31 10:58:17,392 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 10:58:17,392 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): regionserver:36333-0x101a12a451b0001, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,36333,1685530619184 2023-05-31 10:58:17,392 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): regionserver:36333-0x101a12a451b0001, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 10:58:17,392 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,36333,1685530619184] 2023-05-31 10:58:17,392 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,36333,1685530619184; numProcessing=1 2023-05-31 10:58:17,393 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node 
/hbase/draining/jenkins-hbase20.apache.org,36333,1685530619184 already deleted, retry=false 2023-05-31 10:58:17,393 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,36333,1685530619184 expired; onlineServers=0 2023-05-31 10:58:17,393 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,37771,1685530619142' ***** 2023-05-31 10:58:17,393 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-31 10:58:17,393 DEBUG [M:0;jenkins-hbase20:37771] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@598d90d8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-31 10:58:17,393 INFO [M:0;jenkins-hbase20:37771] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,37771,1685530619142 2023-05-31 10:58:17,393 INFO [M:0;jenkins-hbase20:37771] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,37771,1685530619142; all regions closed. 2023-05-31 10:58:17,393 DEBUG [M:0;jenkins-hbase20:37771] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 10:58:17,394 DEBUG [M:0;jenkins-hbase20:37771] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-31 10:58:17,394 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-05-31 10:58:17,394 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685530619318] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685530619318,5,FailOnTimeoutGroup] 2023-05-31 10:58:17,394 DEBUG [M:0;jenkins-hbase20:37771] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-31 10:58:17,394 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685530619318] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685530619318,5,FailOnTimeoutGroup] 2023-05-31 10:58:17,395 INFO [M:0;jenkins-hbase20:37771] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-31 10:58:17,395 INFO [M:0;jenkins-hbase20:37771] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-31 10:58:17,395 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-31 10:58:17,395 INFO [M:0;jenkins-hbase20:37771] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-05-31 10:58:17,395 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:58:17,395 DEBUG [M:0;jenkins-hbase20:37771] master.HMaster(1512): Stopping service threads 2023-05-31 10:58:17,395 INFO [M:0;jenkins-hbase20:37771] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-31 10:58:17,396 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Set 
watcher on znode that does not yet exist, /hbase/master 2023-05-31 10:58:17,396 ERROR [M:0;jenkins-hbase20:37771] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-31 10:58:17,396 INFO [M:0;jenkins-hbase20:37771] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-31 10:58:17,396 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-05-31 10:58:17,396 DEBUG [M:0;jenkins-hbase20:37771] zookeeper.ZKUtil(398): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-31 10:58:17,396 WARN [M:0;jenkins-hbase20:37771] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-31 10:58:17,396 INFO [M:0;jenkins-hbase20:37771] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-31 10:58:17,396 INFO [M:0;jenkins-hbase20:37771] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-31 10:58:17,396 DEBUG [M:0;jenkins-hbase20:37771] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-31 10:58:17,397 INFO [M:0;jenkins-hbase20:37771] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 10:58:17,397 DEBUG [M:0;jenkins-hbase20:37771] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-31 10:58:17,397 DEBUG [M:0;jenkins-hbase20:37771] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-31 10:58:17,397 DEBUG [M:0;jenkins-hbase20:37771] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 10:58:17,397 INFO [M:0;jenkins-hbase20:37771] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=64.78 KB heapSize=78.52 KB 2023-05-31 10:58:17,407 INFO [M:0;jenkins-hbase20:37771] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=64.78 KB at sequenceid=160 (bloomFilter=true), to=hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/47a6efbc61984b5ab006aa924c063018 2023-05-31 10:58:17,411 INFO [M:0;jenkins-hbase20:37771] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 47a6efbc61984b5ab006aa924c063018 2023-05-31 10:58:17,412 DEBUG [M:0;jenkins-hbase20:37771] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/47a6efbc61984b5ab006aa924c063018 as hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/47a6efbc61984b5ab006aa924c063018 2023-05-31 10:58:17,417 INFO [M:0;jenkins-hbase20:37771] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 47a6efbc61984b5ab006aa924c063018 2023-05-31 10:58:17,417 INFO [M:0;jenkins-hbase20:37771] regionserver.HStore(1080): Added 
hdfs://localhost.localdomain:45345/user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/47a6efbc61984b5ab006aa924c063018, entries=18, sequenceid=160, filesize=6.9 K 2023-05-31 10:58:17,418 INFO [M:0;jenkins-hbase20:37771] regionserver.HRegion(2948): Finished flush of dataSize ~64.78 KB/66332, heapSize ~78.51 KB/80392, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 21ms, sequenceid=160, compaction requested=false 2023-05-31 10:58:17,420 INFO [M:0;jenkins-hbase20:37771] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 10:58:17,420 DEBUG [M:0;jenkins-hbase20:37771] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 10:58:17,420 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/e8b9599e-0d1b-9489-3cff-62342fb03bd9/MasterData/WALs/jenkins-hbase20.apache.org,37771,1685530619142 2023-05-31 10:58:17,423 INFO [M:0;jenkins-hbase20:37771] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-31 10:58:17,423 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-31 10:58:17,424 INFO [M:0;jenkins-hbase20:37771] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:37771 2023-05-31 10:58:17,425 DEBUG [M:0;jenkins-hbase20:37771] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,37771,1685530619142 already deleted, retry=false 2023-05-31 10:58:17,463 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-31 10:58:17,493 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): regionserver:36333-0x101a12a451b0001, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 10:58:17,493 INFO [RS:0;jenkins-hbase20:36333] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,36333,1685530619184; zookeeper connection closed. 2023-05-31 10:58:17,493 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): regionserver:36333-0x101a12a451b0001, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 10:58:17,494 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@35bf3a54] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@35bf3a54 2023-05-31 10:58:17,494 INFO [Listener at localhost.localdomain/34183] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-31 10:58:17,593 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 10:58:17,593 INFO [M:0;jenkins-hbase20:37771] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,37771,1685530619142; zookeeper connection closed. 
2023-05-31 10:58:17,594 DEBUG [Listener at localhost.localdomain/34183-EventThread] zookeeper.ZKWatcher(600): master:37771-0x101a12a451b0000, quorum=127.0.0.1:57094, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 10:58:17,597 WARN [Listener at localhost.localdomain/34183] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 10:58:17,606 INFO [Listener at localhost.localdomain/34183] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 10:58:17,713 WARN [BP-235010947-148.251.75.209-1685530618639 heartbeating to localhost.localdomain/127.0.0.1:45345] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-31 10:58:17,713 WARN [BP-235010947-148.251.75.209-1685530618639 heartbeating to localhost.localdomain/127.0.0.1:45345] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-235010947-148.251.75.209-1685530618639 (Datanode Uuid 3db69346-4937-4c0e-afa8-057c2023411c) service to localhost.localdomain/127.0.0.1:45345 2023-05-31 10:58:17,714 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca/cluster_ac2f1810-9200-1f40-45b1-bb67ac505ce6/dfs/data/data3/current/BP-235010947-148.251.75.209-1685530618639] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 10:58:17,715 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca/cluster_ac2f1810-9200-1f40-45b1-bb67ac505ce6/dfs/data/data4/current/BP-235010947-148.251.75.209-1685530618639] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 10:58:17,717 WARN [Listener at localhost.localdomain/34183] 
datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 10:58:17,721 INFO [Listener at localhost.localdomain/34183] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 10:58:17,830 WARN [BP-235010947-148.251.75.209-1685530618639 heartbeating to localhost.localdomain/127.0.0.1:45345] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-31 10:58:17,830 WARN [BP-235010947-148.251.75.209-1685530618639 heartbeating to localhost.localdomain/127.0.0.1:45345] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-235010947-148.251.75.209-1685530618639 (Datanode Uuid e82f9b82-2071-4f5f-a83d-2024fe48b0d3) service to localhost.localdomain/127.0.0.1:45345 2023-05-31 10:58:17,831 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca/cluster_ac2f1810-9200-1f40-45b1-bb67ac505ce6/dfs/data/data1/current/BP-235010947-148.251.75.209-1685530618639] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 10:58:17,831 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca/cluster_ac2f1810-9200-1f40-45b1-bb67ac505ce6/dfs/data/data2/current/BP-235010947-148.251.75.209-1685530618639] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 10:58:17,849 INFO [Listener at localhost.localdomain/34183] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-05-31 10:58:17,971 INFO [Listener at localhost.localdomain/34183] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-31 10:58:18,001 INFO [Listener at localhost.localdomain/34183] 
hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-31 10:58:18,009 INFO [Listener at localhost.localdomain/34183] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRolling Thread=107 (was 95) - Thread LEAK? -, OpenFileDescriptor=531 (was 498) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=47 (was 88), ProcessCount=165 (was 166), AvailableMemoryMB=7819 (was 8207) 2023-05-31 10:58:18,017 INFO [Listener at localhost.localdomain/34183] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=107, OpenFileDescriptor=531, MaxFileDescriptor=60000, SystemLoadAverage=47, ProcessCount=165, AvailableMemoryMB=7819 2023-05-31 10:58:18,017 INFO [Listener at localhost.localdomain/34183] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-31 10:58:18,017 INFO [Listener at localhost.localdomain/34183] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca/hadoop.log.dir so I do NOT create it in target/test-data/8d934caa-9378-6585-5500-17628779eee5 2023-05-31 10:58:18,017 INFO [Listener at localhost.localdomain/34183] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/28ade37a-133b-2341-a956-d49f5a9518ca/hadoop.tmp.dir so I do NOT create it in target/test-data/8d934caa-9378-6585-5500-17628779eee5 2023-05-31 10:58:18,017 INFO [Listener at localhost.localdomain/34183] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8d934caa-9378-6585-5500-17628779eee5/cluster_379bf399-0d59-f101-c9c9-9d92adf07918, deleteOnExit=true 2023-05-31 10:58:18,017 INFO [Listener at localhost.localdomain/34183] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-31 10:58:18,017 INFO [Listener at localhost.localdomain/34183] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8d934caa-9378-6585-5500-17628779eee5/test.cache.data in system properties and HBase conf 2023-05-31 10:58:18,017 INFO [Listener at localhost.localdomain/34183] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8d934caa-9378-6585-5500-17628779eee5/hadoop.tmp.dir in system properties and HBase conf 2023-05-31 10:58:18,017 INFO [Listener at localhost.localdomain/34183] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8d934caa-9378-6585-5500-17628779eee5/hadoop.log.dir in system properties and HBase conf 2023-05-31 10:58:18,018 INFO [Listener at localhost.localdomain/34183] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8d934caa-9378-6585-5500-17628779eee5/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-31 10:58:18,018 INFO [Listener at localhost.localdomain/34183] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8d934caa-9378-6585-5500-17628779eee5/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-31 10:58:18,018 INFO [Listener at 
localhost.localdomain/34183] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-31 10:58:18,018 DEBUG [Listener at localhost.localdomain/34183] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-31 10:58:18,018 INFO [Listener at localhost.localdomain/34183] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8d934caa-9378-6585-5500-17628779eee5/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-31 10:58:18,018 INFO [Listener at localhost.localdomain/34183] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8d934caa-9378-6585-5500-17628779eee5/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-31 10:58:18,018 INFO [Listener at localhost.localdomain/34183] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8d934caa-9378-6585-5500-17628779eee5/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-31 10:58:18,018 INFO [Listener at localhost.localdomain/34183] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8d934caa-9378-6585-5500-17628779eee5/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-31 10:58:18,018 INFO [Listener at localhost.localdomain/34183] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8d934caa-9378-6585-5500-17628779eee5/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-31 10:58:18,018 INFO [Listener at localhost.localdomain/34183] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8d934caa-9378-6585-5500-17628779eee5/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-31 10:58:18,018 INFO [Listener at localhost.localdomain/34183] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8d934caa-9378-6585-5500-17628779eee5/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-31 10:58:18,019 INFO [Listener at localhost.localdomain/34183] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8d934caa-9378-6585-5500-17628779eee5/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-31 10:58:18,019 INFO [Listener at localhost.localdomain/34183] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8d934caa-9378-6585-5500-17628779eee5/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-31 10:58:18,019 INFO [Listener at localhost.localdomain/34183] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8d934caa-9378-6585-5500-17628779eee5/nfs.dump.dir in system properties and HBase conf 2023-05-31 
10:58:18,019 INFO [Listener at localhost.localdomain/34183] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8d934caa-9378-6585-5500-17628779eee5/java.io.tmpdir in system properties and HBase conf 2023-05-31 10:58:18,019 INFO [Listener at localhost.localdomain/34183] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8d934caa-9378-6585-5500-17628779eee5/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-31 10:58:18,019 INFO [Listener at localhost.localdomain/34183] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8d934caa-9378-6585-5500-17628779eee5/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-31 10:58:18,019 INFO [Listener at localhost.localdomain/34183] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8d934caa-9378-6585-5500-17628779eee5/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-31 10:58:18,020 WARN [Listener at localhost.localdomain/34183] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-31 10:58:18,021 WARN [Listener at localhost.localdomain/34183] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-31 10:58:18,022 WARN [Listener at localhost.localdomain/34183] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-31 10:58:18,047 WARN [Listener at localhost.localdomain/34183] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 10:58:18,048 INFO [Listener at localhost.localdomain/34183] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 10:58:18,052 INFO [Listener at localhost.localdomain/34183] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8d934caa-9378-6585-5500-17628779eee5/java.io.tmpdir/Jetty_localhost_localdomain_38165_hdfs____bgxs05/webapp 2023-05-31 10:58:18,123 INFO [Listener at localhost.localdomain/34183] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:38165 2023-05-31 10:58:18,124 WARN [Listener at localhost.localdomain/34183] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-31 10:58:18,125 WARN [Listener at localhost.localdomain/34183] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-31 10:58:18,125 WARN [Listener at localhost.localdomain/34183] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-31 10:58:18,154 WARN [Listener at localhost.localdomain/38283] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 10:58:18,166 WARN [Listener at localhost.localdomain/38283] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-31 10:58:18,168 WARN [Listener at localhost.localdomain/38283] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 10:58:18,169 INFO [Listener at localhost.localdomain/38283] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 10:58:18,174 INFO [Listener at localhost.localdomain/38283] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8d934caa-9378-6585-5500-17628779eee5/java.io.tmpdir/Jetty_localhost_45877_datanode____6ut74u/webapp 2023-05-31 10:58:18,246 INFO [Listener at localhost.localdomain/38283] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45877 2023-05-31 10:58:18,252 WARN [Listener at localhost.localdomain/36451] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 10:58:18,265 WARN [Listener at localhost.localdomain/36451] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-31 10:58:18,268 WARN [Listener at localhost.localdomain/36451] http.HttpRequestLog(97): Jetty request 
log can only be enabled using Log4j 2023-05-31 10:58:18,270 INFO [Listener at localhost.localdomain/36451] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 10:58:18,274 INFO [Listener at localhost.localdomain/36451] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8d934caa-9378-6585-5500-17628779eee5/java.io.tmpdir/Jetty_localhost_42755_datanode____xohq9s/webapp 2023-05-31 10:58:18,317 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x329cd3a86f3db298: Processing first storage report for DS-5117327e-e70f-4215-a7bf-0a0d0782767e from datanode c23a5858-0b2f-4610-a543-7e81cd1d39e2 2023-05-31 10:58:18,317 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x329cd3a86f3db298: from storage DS-5117327e-e70f-4215-a7bf-0a0d0782767e node DatanodeRegistration(127.0.0.1:42077, datanodeUuid=c23a5858-0b2f-4610-a543-7e81cd1d39e2, infoPort=44573, infoSecurePort=0, ipcPort=36451, storageInfo=lv=-57;cid=testClusterID;nsid=1603506550;c=1685530698023), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 10:58:18,317 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x329cd3a86f3db298: Processing first storage report for DS-21d5814f-842f-4b05-b15a-1c4a33c1fd08 from datanode c23a5858-0b2f-4610-a543-7e81cd1d39e2 2023-05-31 10:58:18,317 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x329cd3a86f3db298: from storage DS-21d5814f-842f-4b05-b15a-1c4a33c1fd08 node DatanodeRegistration(127.0.0.1:42077, datanodeUuid=c23a5858-0b2f-4610-a543-7e81cd1d39e2, infoPort=44573, infoSecurePort=0, ipcPort=36451, 
storageInfo=lv=-57;cid=testClusterID;nsid=1603506550;c=1685530698023), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 10:58:18,370 INFO [Listener at localhost.localdomain/36451] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42755 2023-05-31 10:58:18,377 WARN [Listener at localhost.localdomain/46143] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 10:58:18,438 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x559979154f6c043: Processing first storage report for DS-3644216a-678e-495d-aa92-6891291b43cf from datanode 9994f6d1-f73d-4389-9d94-3f6c2f6f034f 2023-05-31 10:58:18,438 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x559979154f6c043: from storage DS-3644216a-678e-495d-aa92-6891291b43cf node DatanodeRegistration(127.0.0.1:37257, datanodeUuid=9994f6d1-f73d-4389-9d94-3f6c2f6f034f, infoPort=42219, infoSecurePort=0, ipcPort=46143, storageInfo=lv=-57;cid=testClusterID;nsid=1603506550;c=1685530698023), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 10:58:18,438 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x559979154f6c043: Processing first storage report for DS-56d71b75-2b82-4a38-a2a4-87ae2c88ba9a from datanode 9994f6d1-f73d-4389-9d94-3f6c2f6f034f 2023-05-31 10:58:18,438 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x559979154f6c043: from storage DS-56d71b75-2b82-4a38-a2a4-87ae2c88ba9a node DatanodeRegistration(127.0.0.1:37257, datanodeUuid=9994f6d1-f73d-4389-9d94-3f6c2f6f034f, infoPort=42219, infoSecurePort=0, ipcPort=46143, storageInfo=lv=-57;cid=testClusterID;nsid=1603506550;c=1685530698023), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 10:58:18,486 DEBUG [Listener at 
localhost.localdomain/46143] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8d934caa-9378-6585-5500-17628779eee5 2023-05-31 10:58:18,489 INFO [Listener at localhost.localdomain/46143] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8d934caa-9378-6585-5500-17628779eee5/cluster_379bf399-0d59-f101-c9c9-9d92adf07918/zookeeper_0, clientPort=64821, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8d934caa-9378-6585-5500-17628779eee5/cluster_379bf399-0d59-f101-c9c9-9d92adf07918/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8d934caa-9378-6585-5500-17628779eee5/cluster_379bf399-0d59-f101-c9c9-9d92adf07918/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-31 10:58:18,490 INFO [Listener at localhost.localdomain/46143] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=64821 2023-05-31 10:58:18,491 INFO [Listener at localhost.localdomain/46143] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:58:18,492 INFO [Listener at localhost.localdomain/46143] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:58:18,512 INFO [Listener at localhost.localdomain/46143] util.FSUtils(471): Created version file at 
hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110 with version=8 2023-05-31 10:58:18,512 INFO [Listener at localhost.localdomain/46143] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:40463/user/jenkins/test-data/453f7dc9-7b44-70ec-368b-24ee2cf49cb2/hbase-staging 2023-05-31 10:58:18,514 INFO [Listener at localhost.localdomain/46143] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-05-31 10:58:18,514 INFO [Listener at localhost.localdomain/46143] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 10:58:18,515 INFO [Listener at localhost.localdomain/46143] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-31 10:58:18,515 INFO [Listener at localhost.localdomain/46143] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-31 10:58:18,515 INFO [Listener at localhost.localdomain/46143] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 10:58:18,515 INFO [Listener at localhost.localdomain/46143] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-31 10:58:18,515 INFO [Listener at localhost.localdomain/46143] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, 
hbase.pb.AdminService 2023-05-31 10:58:18,516 INFO [Listener at localhost.localdomain/46143] ipc.NettyRpcServer(120): Bind to /148.251.75.209:38971 2023-05-31 10:58:18,517 INFO [Listener at localhost.localdomain/46143] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:58:18,518 INFO [Listener at localhost.localdomain/46143] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:58:18,519 INFO [Listener at localhost.localdomain/46143] zookeeper.RecoverableZooKeeper(93): Process identifier=master:38971 connecting to ZooKeeper ensemble=127.0.0.1:64821 2023-05-31 10:58:18,523 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): master:389710x0, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-31 10:58:18,524 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:38971-0x101a12b7b230000 connected 2023-05-31 10:58:18,533 DEBUG [Listener at localhost.localdomain/46143] zookeeper.ZKUtil(164): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 10:58:18,533 DEBUG [Listener at localhost.localdomain/46143] zookeeper.ZKUtil(164): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 10:58:18,534 DEBUG [Listener at localhost.localdomain/46143] zookeeper.ZKUtil(164): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-31 10:58:18,534 DEBUG [Listener at localhost.localdomain/46143] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38971 2023-05-31 10:58:18,535 DEBUG [Listener at localhost.localdomain/46143] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38971 2023-05-31 10:58:18,535 DEBUG [Listener at localhost.localdomain/46143] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38971 2023-05-31 10:58:18,535 DEBUG [Listener at localhost.localdomain/46143] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38971 2023-05-31 10:58:18,535 DEBUG [Listener at localhost.localdomain/46143] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38971 2023-05-31 10:58:18,536 INFO [Listener at localhost.localdomain/46143] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110, hbase.cluster.distributed=false 2023-05-31 10:58:18,549 INFO [Listener at localhost.localdomain/46143] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-05-31 10:58:18,549 INFO [Listener at localhost.localdomain/46143] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 10:58:18,549 INFO [Listener at localhost.localdomain/46143] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-31 10:58:18,549 INFO [Listener at localhost.localdomain/46143] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-31 10:58:18,549 INFO [Listener at localhost.localdomain/46143] ipc.RpcExecutor(189): Instantiated 
replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 10:58:18,549 INFO [Listener at localhost.localdomain/46143] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-31 10:58:18,549 INFO [Listener at localhost.localdomain/46143] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-31 10:58:18,551 INFO [Listener at localhost.localdomain/46143] ipc.NettyRpcServer(120): Bind to /148.251.75.209:44381 2023-05-31 10:58:18,551 INFO [Listener at localhost.localdomain/46143] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-31 10:58:18,552 DEBUG [Listener at localhost.localdomain/46143] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-31 10:58:18,552 INFO [Listener at localhost.localdomain/46143] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:58:18,553 INFO [Listener at localhost.localdomain/46143] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:58:18,553 INFO [Listener at localhost.localdomain/46143] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44381 connecting to ZooKeeper ensemble=127.0.0.1:64821 2023-05-31 10:58:18,565 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): regionserver:443810x0, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-31 10:58:18,567 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44381-0x101a12b7b230001 connected 2023-05-31 10:58:18,567 DEBUG [Listener at localhost.localdomain/46143] zookeeper.ZKUtil(164): regionserver:44381-0x101a12b7b230001, quorum=127.0.0.1:64821, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 10:58:18,567 DEBUG [Listener at localhost.localdomain/46143] zookeeper.ZKUtil(164): regionserver:44381-0x101a12b7b230001, quorum=127.0.0.1:64821, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 10:58:18,568 DEBUG [Listener at localhost.localdomain/46143] zookeeper.ZKUtil(164): regionserver:44381-0x101a12b7b230001, quorum=127.0.0.1:64821, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-31 10:58:18,568 DEBUG [Listener at localhost.localdomain/46143] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44381 2023-05-31 10:58:18,568 DEBUG [Listener at localhost.localdomain/46143] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44381 2023-05-31 10:58:18,570 DEBUG [Listener at localhost.localdomain/46143] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44381 2023-05-31 10:58:18,570 DEBUG [Listener at localhost.localdomain/46143] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44381 2023-05-31 10:58:18,570 DEBUG [Listener at localhost.localdomain/46143] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44381 2023-05-31 10:58:18,572 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,38971,1685530698514 2023-05-31 10:58:18,573 DEBUG [Listener at 
localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-31 10:58:18,573 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,38971,1685530698514 2023-05-31 10:58:18,574 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): regionserver:44381-0x101a12b7b230001, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-31 10:58:18,574 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-31 10:58:18,575 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:58:18,575 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-31 10:58:18,576 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,38971,1685530698514 from backup master directory 2023-05-31 10:58:18,576 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-31 10:58:18,577 DEBUG [Listener 
at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,38971,1685530698514 2023-05-31 10:58:18,577 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-31 10:58:18,577 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-31 10:58:18,577 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,38971,1685530698514 2023-05-31 10:58:18,589 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/hbase.id with ID: 95bed5de-90d7-410e-abe7-1ec568ecab72 2023-05-31 10:58:18,597 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:58:18,599 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:58:18,606 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x17909112 to 127.0.0.1:64821 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 
10:58:18,613 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@502c4bfb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 10:58:18,613 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-31 10:58:18,614 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-31 10:58:18,614 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 10:58:18,615 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/MasterData/data/master/store-tmp 2023-05-31 10:58:18,623 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated 
master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 10:58:18,623 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-31 10:58:18,623 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 10:58:18,623 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 10:58:18,623 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-31 10:58:18,623 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 10:58:18,623 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-31 10:58:18,623 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 10:58:18,624 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/MasterData/WALs/jenkins-hbase20.apache.org,38971,1685530698514 2023-05-31 10:58:18,627 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C38971%2C1685530698514, suffix=, logDir=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/MasterData/WALs/jenkins-hbase20.apache.org,38971,1685530698514, archiveDir=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/MasterData/oldWALs, maxLogs=10 2023-05-31 10:58:18,635 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/MasterData/WALs/jenkins-hbase20.apache.org,38971,1685530698514/jenkins-hbase20.apache.org%2C38971%2C1685530698514.1685530698628 2023-05-31 10:58:18,635 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37257,DS-3644216a-678e-495d-aa92-6891291b43cf,DISK], DatanodeInfoWithStorage[127.0.0.1:42077,DS-5117327e-e70f-4215-a7bf-0a0d0782767e,DISK]] 2023-05-31 10:58:18,635 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-31 10:58:18,635 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 10:58:18,635 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:58:18,636 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:58:18,637 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:58:18,639 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-31 10:58:18,639 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-31 10:58:18,640 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] 
regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 10:58:18,640 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:58:18,640 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:58:18,643 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 10:58:18,645 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 10:58:18,645 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=851848, jitterRate=0.08318153023719788}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 10:58:18,645 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 10:58:18,645 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, 
flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-31 10:58:18,646 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-31 10:58:18,647 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-31 10:58:18,647 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-31 10:58:18,647 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-31 10:58:18,647 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-31 10:58:18,647 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-31 10:58:18,648 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-31 10:58:18,649 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-05-31 10:58:18,660 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-31 10:58:18,660 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-05-31 10:58:18,661 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-31 10:58:18,661 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-31 10:58:18,661 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-31 10:58:18,663 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:58:18,663 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-31 10:58:18,663 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): 
master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-31 10:58:18,664 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-31 10:58:18,665 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-31 10:58:18,665 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): regionserver:44381-0x101a12b7b230001, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-31 10:58:18,665 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:58:18,665 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,38971,1685530698514, sessionid=0x101a12b7b230000, setting cluster-up flag (Was=false) 2023-05-31 10:58:18,668 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:58:18,671 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-31 10:58:18,671 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] 
procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,38971,1685530698514 2023-05-31 10:58:18,673 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:58:18,675 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-31 10:58:18,676 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,38971,1685530698514 2023-05-31 10:58:18,676 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/.hbase-snapshot/.tmp 2023-05-31 10:58:18,679 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-31 10:58:18,679 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-31 10:58:18,679 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-31 10:58:18,679 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-31 10:58:18,679 DEBUG 
[master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-31 10:58:18,679 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-05-31 10:58:18,679 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:58:18,679 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-31 10:58:18,679 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:58:18,681 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685530728681 2023-05-31 10:58:18,682 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-31 10:58:18,682 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-31 10:58:18,682 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-31 10:58:18,682 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-31 10:58:18,682 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-31 10:58:18,682 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-31 10:58:18,683 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-31 10:58:18,683 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-31 10:58:18,683 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-31 10:58:18,683 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-31 10:58:18,683 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-31 10:58:18,683 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-31 10:58:18,684 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-31 10:58:18,684 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-31 10:58:18,684 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large 
file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685530698684,5,FailOnTimeoutGroup] 2023-05-31 10:58:18,684 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685530698684,5,FailOnTimeoutGroup] 2023-05-31 10:58:18,684 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-31 10:58:18,684 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-31 10:58:18,684 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-31 10:58:18,684 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-05-31 10:58:18,685 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-31 10:58:18,694 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-31 10:58:18,694 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-31 10:58:18,694 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', 
KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110 2023-05-31 10:58:18,703 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 10:58:18,704 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-31 10:58:18,705 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/meta/1588230740/info 2023-05-31 10:58:18,706 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 
0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-31 10:58:18,706 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 10:58:18,706 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-31 10:58:18,707 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/meta/1588230740/rep_barrier 2023-05-31 10:58:18,707 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-31 
10:58:18,708 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 10:58:18,708 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-31 10:58:18,708 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/meta/1588230740/table 2023-05-31 10:58:18,709 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-31 10:58:18,709 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 10:58:18,710 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/meta/1588230740 2023-05-31 10:58:18,710 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/meta/1588230740 2023-05-31 10:58:18,711 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-31 10:58:18,712 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-31 10:58:18,714 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 10:58:18,714 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=831871, jitterRate=0.057779401540756226}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-31 10:58:18,714 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-31 10:58:18,714 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-31 10:58:18,714 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-31 10:58:18,714 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-31 10:58:18,715 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-31 10:58:18,715 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for 
region hbase:meta,,1.1588230740 2023-05-31 10:58:18,715 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-31 10:58:18,715 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-31 10:58:18,716 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-31 10:58:18,716 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-31 10:58:18,716 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-31 10:58:18,718 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-31 10:58:18,719 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-31 10:58:18,773 INFO [RS:0;jenkins-hbase20:44381] regionserver.HRegionServer(951): ClusterId : 95bed5de-90d7-410e-abe7-1ec568ecab72 2023-05-31 10:58:18,773 DEBUG [RS:0;jenkins-hbase20:44381] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-31 10:58:18,775 DEBUG [RS:0;jenkins-hbase20:44381] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-31 10:58:18,775 DEBUG [RS:0;jenkins-hbase20:44381] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-31 10:58:18,778 DEBUG [RS:0;jenkins-hbase20:44381] 
procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-31 10:58:18,779 DEBUG [RS:0;jenkins-hbase20:44381] zookeeper.ReadOnlyZKClient(139): Connect 0x28454e98 to 127.0.0.1:64821 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 10:58:18,789 DEBUG [RS:0;jenkins-hbase20:44381] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2682b5e0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 10:58:18,789 DEBUG [RS:0;jenkins-hbase20:44381] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@22bd25e8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-31 10:58:18,800 DEBUG [RS:0;jenkins-hbase20:44381] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:44381 2023-05-31 10:58:18,800 INFO [RS:0;jenkins-hbase20:44381] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-31 10:58:18,800 INFO [RS:0;jenkins-hbase20:44381] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-31 10:58:18,800 DEBUG [RS:0;jenkins-hbase20:44381] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-31 10:58:18,801 INFO [RS:0;jenkins-hbase20:44381] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,38971,1685530698514 with isa=jenkins-hbase20.apache.org/148.251.75.209:44381, startcode=1685530698549 2023-05-31 10:58:18,801 DEBUG [RS:0;jenkins-hbase20:44381] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-31 10:58:18,805 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:52509, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-05-31 10:58:18,806 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38971] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,44381,1685530698549 2023-05-31 10:58:18,806 DEBUG [RS:0;jenkins-hbase20:44381] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110 2023-05-31 10:58:18,806 DEBUG [RS:0;jenkins-hbase20:44381] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:38283 2023-05-31 10:58:18,806 DEBUG [RS:0;jenkins-hbase20:44381] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-31 10:58:18,808 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 10:58:18,808 DEBUG [RS:0;jenkins-hbase20:44381] zookeeper.ZKUtil(162): regionserver:44381-0x101a12b7b230001, quorum=127.0.0.1:64821, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44381,1685530698549 2023-05-31 10:58:18,808 WARN [RS:0;jenkins-hbase20:44381] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes 
will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-31 10:58:18,808 INFO [RS:0;jenkins-hbase20:44381] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 10:58:18,809 DEBUG [RS:0;jenkins-hbase20:44381] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/WALs/jenkins-hbase20.apache.org,44381,1685530698549 2023-05-31 10:58:18,809 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,44381,1685530698549] 2023-05-31 10:58:18,812 DEBUG [RS:0;jenkins-hbase20:44381] zookeeper.ZKUtil(162): regionserver:44381-0x101a12b7b230001, quorum=127.0.0.1:64821, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44381,1685530698549 2023-05-31 10:58:18,813 DEBUG [RS:0;jenkins-hbase20:44381] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-31 10:58:18,813 INFO [RS:0;jenkins-hbase20:44381] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-31 10:58:18,814 INFO [RS:0;jenkins-hbase20:44381] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-31 10:58:18,815 INFO [RS:0;jenkins-hbase20:44381] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-31 10:58:18,815 INFO [RS:0;jenkins-hbase20:44381] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-05-31 10:58:18,815 INFO [RS:0;jenkins-hbase20:44381] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-31 10:58:18,816 INFO [RS:0;jenkins-hbase20:44381] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-05-31 10:58:18,816 DEBUG [RS:0;jenkins-hbase20:44381] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:58:18,816 DEBUG [RS:0;jenkins-hbase20:44381] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:58:18,816 DEBUG [RS:0;jenkins-hbase20:44381] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:58:18,816 DEBUG [RS:0;jenkins-hbase20:44381] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:58:18,816 DEBUG [RS:0;jenkins-hbase20:44381] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:58:18,816 DEBUG [RS:0;jenkins-hbase20:44381] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-31 10:58:18,816 DEBUG [RS:0;jenkins-hbase20:44381] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:58:18,816 DEBUG [RS:0;jenkins-hbase20:44381] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:58:18,816 DEBUG [RS:0;jenkins-hbase20:44381] 
executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:58:18,817 DEBUG [RS:0;jenkins-hbase20:44381] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-31 10:58:18,817 INFO [RS:0;jenkins-hbase20:44381] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-31 10:58:18,817 INFO [RS:0;jenkins-hbase20:44381] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-31 10:58:18,818 INFO [RS:0;jenkins-hbase20:44381] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-31 10:58:18,830 INFO [RS:0;jenkins-hbase20:44381] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-31 10:58:18,830 INFO [RS:0;jenkins-hbase20:44381] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44381,1685530698549-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-31 10:58:18,838 INFO [RS:0;jenkins-hbase20:44381] regionserver.Replication(203): jenkins-hbase20.apache.org,44381,1685530698549 started 2023-05-31 10:58:18,838 INFO [RS:0;jenkins-hbase20:44381] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,44381,1685530698549, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:44381, sessionid=0x101a12b7b230001 2023-05-31 10:58:18,838 DEBUG [RS:0;jenkins-hbase20:44381] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-31 10:58:18,838 DEBUG [RS:0;jenkins-hbase20:44381] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,44381,1685530698549 2023-05-31 10:58:18,838 DEBUG [RS:0;jenkins-hbase20:44381] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,44381,1685530698549' 2023-05-31 10:58:18,838 DEBUG [RS:0;jenkins-hbase20:44381] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 10:58:18,839 DEBUG [RS:0;jenkins-hbase20:44381] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 10:58:18,839 DEBUG [RS:0;jenkins-hbase20:44381] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-31 10:58:18,839 DEBUG [RS:0;jenkins-hbase20:44381] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-31 10:58:18,839 DEBUG [RS:0;jenkins-hbase20:44381] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,44381,1685530698549 2023-05-31 10:58:18,839 DEBUG [RS:0;jenkins-hbase20:44381] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,44381,1685530698549' 2023-05-31 10:58:18,839 DEBUG [RS:0;jenkins-hbase20:44381] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures 
on node: '/hbase/online-snapshot/abort' 2023-05-31 10:58:18,840 DEBUG [RS:0;jenkins-hbase20:44381] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-31 10:58:18,840 DEBUG [RS:0;jenkins-hbase20:44381] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-31 10:58:18,840 INFO [RS:0;jenkins-hbase20:44381] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-31 10:58:18,840 INFO [RS:0;jenkins-hbase20:44381] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-05-31 10:58:18,869 DEBUG [jenkins-hbase20:38971] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-31 10:58:18,870 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,44381,1685530698549, state=OPENING 2023-05-31 10:58:18,871 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-31 10:58:18,872 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:58:18,872 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,44381,1685530698549}] 2023-05-31 10:58:18,872 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-31 10:58:18,943 INFO [RS:0;jenkins-hbase20:44381] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C44381%2C1685530698549, suffix=, 
logDir=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/WALs/jenkins-hbase20.apache.org,44381,1685530698549, archiveDir=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/oldWALs, maxLogs=32 2023-05-31 10:58:18,956 INFO [RS:0;jenkins-hbase20:44381] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/WALs/jenkins-hbase20.apache.org,44381,1685530698549/jenkins-hbase20.apache.org%2C44381%2C1685530698549.1685530698944 2023-05-31 10:58:18,956 DEBUG [RS:0;jenkins-hbase20:44381] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42077,DS-5117327e-e70f-4215-a7bf-0a0d0782767e,DISK], DatanodeInfoWithStorage[127.0.0.1:37257,DS-3644216a-678e-495d-aa92-6891291b43cf,DISK]] 2023-05-31 10:58:19,028 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,44381,1685530698549 2023-05-31 10:58:19,028 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-31 10:58:19,034 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:33296, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-31 10:58:19,041 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-31 10:58:19,041 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 10:58:19,044 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C44381%2C1685530698549.meta, suffix=.meta, 
logDir=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/WALs/jenkins-hbase20.apache.org,44381,1685530698549, archiveDir=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/oldWALs, maxLogs=32 2023-05-31 10:58:19,051 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/WALs/jenkins-hbase20.apache.org,44381,1685530698549/jenkins-hbase20.apache.org%2C44381%2C1685530698549.meta.1685530699044.meta 2023-05-31 10:58:19,051 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37257,DS-3644216a-678e-495d-aa92-6891291b43cf,DISK], DatanodeInfoWithStorage[127.0.0.1:42077,DS-5117327e-e70f-4215-a7bf-0a0d0782767e,DISK]] 2023-05-31 10:58:19,051 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-31 10:58:19,051 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-31 10:58:19,051 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-31 10:58:19,051 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-05-31 10:58:19,051 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-31 10:58:19,051 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 10:58:19,051 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-31 10:58:19,051 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-31 10:58:19,053 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-31 10:58:19,053 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/meta/1588230740/info 2023-05-31 10:58:19,053 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/meta/1588230740/info 2023-05-31 10:58:19,054 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-31 10:58:19,054 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 10:58:19,054 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-31 10:58:19,055 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/meta/1588230740/rep_barrier 2023-05-31 10:58:19,055 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/meta/1588230740/rep_barrier 2023-05-31 10:58:19,055 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-31 10:58:19,056 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 10:58:19,056 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-31 10:58:19,056 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/meta/1588230740/table 2023-05-31 10:58:19,056 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/meta/1588230740/table 2023-05-31 10:58:19,057 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-31 10:58:19,057 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 10:58:19,058 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/meta/1588230740 2023-05-31 10:58:19,059 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/meta/1588230740 2023-05-31 10:58:19,061 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-05-31 10:58:19,062 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-31 10:58:19,063 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=855368, jitterRate=0.08765721321105957}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-31 10:58:19,063 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-31 10:58:19,066 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685530699027 2023-05-31 10:58:19,069 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-31 10:58:19,070 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-31 10:58:19,070 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,44381,1685530698549, state=OPEN 2023-05-31 10:58:19,071 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-31 10:58:19,071 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-31 10:58:19,073 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-31 10:58:19,073 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,44381,1685530698549 in 199 msec 2023-05-31 10:58:19,075 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-31 10:58:19,075 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 357 msec 2023-05-31 10:58:19,076 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 398 msec 2023-05-31 10:58:19,077 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685530699076, completionTime=-1 2023-05-31 10:58:19,077 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-31 10:58:19,077 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-05-31 10:58:19,080 DEBUG [hconnection-0x44a8dae1-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-31 10:58:19,082 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:33298, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-31 10:58:19,083 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-31 10:58:19,083 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685530759083 2023-05-31 10:58:19,083 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685530819083 2023-05-31 10:58:19,083 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-05-31 10:58:19,090 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,38971,1685530698514-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-31 10:58:19,090 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,38971,1685530698514-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-31 10:58:19,091 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,38971,1685530698514-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-05-31 10:58:19,091 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:38971, period=300000, unit=MILLISECONDS is enabled. 2023-05-31 10:58:19,091 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-31 10:58:19,091 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-05-31 10:58:19,091 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-31 10:58:19,092 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-31 10:58:19,092 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-31 10:58:19,094 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-31 10:58:19,094 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-31 10:58:19,096 DEBUG [HFileArchiver-11] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/.tmp/data/hbase/namespace/9536a05661b4f31b4edc59e0f034c852 2023-05-31 10:58:19,097 DEBUG 
[HFileArchiver-11] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/.tmp/data/hbase/namespace/9536a05661b4f31b4edc59e0f034c852 empty. 2023-05-31 10:58:19,097 DEBUG [HFileArchiver-11] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/.tmp/data/hbase/namespace/9536a05661b4f31b4edc59e0f034c852 2023-05-31 10:58:19,097 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-31 10:58:19,108 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-31 10:58:19,110 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9536a05661b4f31b4edc59e0f034c852, NAME => 'hbase:namespace,,1685530699091.9536a05661b4f31b4edc59e0f034c852.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/.tmp 2023-05-31 10:58:19,118 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685530699091.9536a05661b4f31b4edc59e0f034c852.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 10:58:19,118 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 9536a05661b4f31b4edc59e0f034c852, disabling compactions & flushes 2023-05-31 
10:58:19,118 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685530699091.9536a05661b4f31b4edc59e0f034c852. 2023-05-31 10:58:19,118 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685530699091.9536a05661b4f31b4edc59e0f034c852. 2023-05-31 10:58:19,118 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685530699091.9536a05661b4f31b4edc59e0f034c852. after waiting 0 ms 2023-05-31 10:58:19,118 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685530699091.9536a05661b4f31b4edc59e0f034c852. 2023-05-31 10:58:19,118 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685530699091.9536a05661b4f31b4edc59e0f034c852. 2023-05-31 10:58:19,118 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 9536a05661b4f31b4edc59e0f034c852: 2023-05-31 10:58:19,120 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-31 10:58:19,121 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685530699091.9536a05661b4f31b4edc59e0f034c852.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685530699121"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685530699121"}]},"ts":"1685530699121"} 2023-05-31 10:58:19,123 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-05-31 10:58:19,124 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-31 10:58:19,124 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685530699124"}]},"ts":"1685530699124"} 2023-05-31 10:58:19,126 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-31 10:58:19,130 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=9536a05661b4f31b4edc59e0f034c852, ASSIGN}] 2023-05-31 10:58:19,133 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=9536a05661b4f31b4edc59e0f034c852, ASSIGN 2023-05-31 10:58:19,134 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=9536a05661b4f31b4edc59e0f034c852, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44381,1685530698549; forceNewPlan=false, retain=false 2023-05-31 10:58:19,285 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=9536a05661b4f31b4edc59e0f034c852, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44381,1685530698549 2023-05-31 10:58:19,285 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685530699091.9536a05661b4f31b4edc59e0f034c852.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685530699285"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685530699285"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685530699285"}]},"ts":"1685530699285"} 2023-05-31 10:58:19,288 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 9536a05661b4f31b4edc59e0f034c852, server=jenkins-hbase20.apache.org,44381,1685530698549}] 2023-05-31 10:58:19,450 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685530699091.9536a05661b4f31b4edc59e0f034c852. 2023-05-31 10:58:19,450 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9536a05661b4f31b4edc59e0f034c852, NAME => 'hbase:namespace,,1685530699091.9536a05661b4f31b4edc59e0f034c852.', STARTKEY => '', ENDKEY => ''} 2023-05-31 10:58:19,450 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 9536a05661b4f31b4edc59e0f034c852 2023-05-31 10:58:19,451 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685530699091.9536a05661b4f31b4edc59e0f034c852.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 10:58:19,451 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 9536a05661b4f31b4edc59e0f034c852 2023-05-31 10:58:19,451 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 9536a05661b4f31b4edc59e0f034c852 2023-05-31 10:58:19,453 INFO 
[StoreOpener-9536a05661b4f31b4edc59e0f034c852-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 9536a05661b4f31b4edc59e0f034c852 2023-05-31 10:58:19,456 DEBUG [StoreOpener-9536a05661b4f31b4edc59e0f034c852-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/namespace/9536a05661b4f31b4edc59e0f034c852/info 2023-05-31 10:58:19,456 DEBUG [StoreOpener-9536a05661b4f31b4edc59e0f034c852-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/namespace/9536a05661b4f31b4edc59e0f034c852/info 2023-05-31 10:58:19,457 INFO [StoreOpener-9536a05661b4f31b4edc59e0f034c852-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9536a05661b4f31b4edc59e0f034c852 columnFamilyName info 2023-05-31 10:58:19,457 INFO [StoreOpener-9536a05661b4f31b4edc59e0f034c852-1] regionserver.HStore(310): Store=9536a05661b4f31b4edc59e0f034c852/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 
2023-05-31 10:58:19,458 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/namespace/9536a05661b4f31b4edc59e0f034c852 2023-05-31 10:58:19,458 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/namespace/9536a05661b4f31b4edc59e0f034c852 2023-05-31 10:58:19,460 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 9536a05661b4f31b4edc59e0f034c852 2023-05-31 10:58:19,462 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/namespace/9536a05661b4f31b4edc59e0f034c852/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 10:58:19,462 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 9536a05661b4f31b4edc59e0f034c852; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=793930, jitterRate=0.009534373879432678}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 10:58:19,462 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 9536a05661b4f31b4edc59e0f034c852: 2023-05-31 10:58:19,464 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685530699091.9536a05661b4f31b4edc59e0f034c852., pid=6, masterSystemTime=1685530699443 2023-05-31 10:58:19,466 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685530699091.9536a05661b4f31b4edc59e0f034c852. 2023-05-31 10:58:19,466 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685530699091.9536a05661b4f31b4edc59e0f034c852. 2023-05-31 10:58:19,466 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=9536a05661b4f31b4edc59e0f034c852, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44381,1685530698549 2023-05-31 10:58:19,466 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685530699091.9536a05661b4f31b4edc59e0f034c852.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685530699466"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685530699466"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685530699466"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685530699466"}]},"ts":"1685530699466"} 2023-05-31 10:58:19,470 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-31 10:58:19,470 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 9536a05661b4f31b4edc59e0f034c852, server=jenkins-hbase20.apache.org,44381,1685530698549 in 180 msec 2023-05-31 10:58:19,472 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-31 10:58:19,472 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=9536a05661b4f31b4edc59e0f034c852, ASSIGN in 341 msec 2023-05-31 10:58:19,473 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-31 10:58:19,473 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685530699473"}]},"ts":"1685530699473"} 2023-05-31 10:58:19,474 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-31 10:58:19,476 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-31 10:58:19,478 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 385 msec 2023-05-31 10:58:19,493 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-31 10:58:19,498 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-31 10:58:19,498 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:58:19,505 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-31 10:58:19,514 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): master:38971-0x101a12b7b230000, 
quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-31 10:58:19,517 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 13 msec 2023-05-31 10:58:19,527 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-31 10:58:19,536 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-31 10:58:19,543 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 15 msec 2023-05-31 10:58:19,555 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-31 10:58:19,559 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-31 10:58:19,559 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.982sec 2023-05-31 10:58:19,559 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-31 10:58:19,559 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-05-31 10:58:19,559 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-31 10:58:19,560 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,38971,1685530698514-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-31 10:58:19,560 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,38971,1685530698514-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-05-31 10:58:19,562 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-31 10:58:19,573 DEBUG [Listener at localhost.localdomain/46143] zookeeper.ReadOnlyZKClient(139): Connect 0x721aeed9 to 127.0.0.1:64821 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 10:58:19,578 DEBUG [Listener at localhost.localdomain/46143] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@37771f33, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 10:58:19,586 DEBUG [hconnection-0x43c4130d-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-31 10:58:19,588 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:33306, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-31 10:58:19,590 INFO [Listener at localhost.localdomain/46143] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,38971,1685530698514 2023-05-31 10:58:19,590 INFO [Listener at localhost.localdomain/46143] fs.HFileSystem(337): Added 
intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 10:58:19,592 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-31 10:58:19,592 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:58:19,593 INFO [Listener at localhost.localdomain/46143] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-31 10:58:19,593 INFO [Listener at localhost.localdomain/46143] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 10:58:19,595 INFO [Listener at localhost.localdomain/46143] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=test.com%2C8080%2C1, suffix=, logDir=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/WALs/test.com,8080,1, archiveDir=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/oldWALs, maxLogs=32 2023-05-31 10:58:19,599 INFO [Listener at localhost.localdomain/46143] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/WALs/test.com,8080,1/test.com%2C8080%2C1.1685530699595 2023-05-31 10:58:19,600 DEBUG [Listener at localhost.localdomain/46143] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42077,DS-5117327e-e70f-4215-a7bf-0a0d0782767e,DISK], DatanodeInfoWithStorage[127.0.0.1:37257,DS-3644216a-678e-495d-aa92-6891291b43cf,DISK]] 2023-05-31 10:58:19,610 INFO 
[Listener at localhost.localdomain/46143] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/WALs/test.com,8080,1/test.com%2C8080%2C1.1685530699595 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/WALs/test.com,8080,1/test.com%2C8080%2C1.1685530699600 2023-05-31 10:58:19,610 DEBUG [Listener at localhost.localdomain/46143] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37257,DS-3644216a-678e-495d-aa92-6891291b43cf,DISK], DatanodeInfoWithStorage[127.0.0.1:42077,DS-5117327e-e70f-4215-a7bf-0a0d0782767e,DISK]] 2023-05-31 10:58:19,611 DEBUG [Listener at localhost.localdomain/46143] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/WALs/test.com,8080,1/test.com%2C8080%2C1.1685530699595 is not closed yet, will try archiving it next time 2023-05-31 10:58:19,611 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/WALs/test.com,8080,1 2023-05-31 10:58:19,619 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/WALs/test.com,8080,1/test.com%2C8080%2C1.1685530699595 to hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/oldWALs/test.com%2C8080%2C1.1685530699595 2023-05-31 10:58:19,621 DEBUG [Listener at localhost.localdomain/46143] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/oldWALs 2023-05-31 10:58:19,621 INFO [Listener at localhost.localdomain/46143] wal.AbstractFSWAL(1031): Closed WAL: FSHLog test.com%2C8080%2C1:(num 1685530699600) 2023-05-31 10:58:19,621 INFO [Listener at localhost.localdomain/46143] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-31 10:58:19,622 DEBUG 
[Listener at localhost.localdomain/46143] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x721aeed9 to 127.0.0.1:64821 2023-05-31 10:58:19,622 DEBUG [Listener at localhost.localdomain/46143] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 10:58:19,623 DEBUG [Listener at localhost.localdomain/46143] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-31 10:58:19,623 DEBUG [Listener at localhost.localdomain/46143] util.JVMClusterUtil(257): Found active master hash=683975619, stopped=false 2023-05-31 10:58:19,623 INFO [Listener at localhost.localdomain/46143] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,38971,1685530698514 2023-05-31 10:58:19,624 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): regionserver:44381-0x101a12b7b230001, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 10:58:19,624 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 10:58:19,624 INFO [Listener at localhost.localdomain/46143] procedure2.ProcedureExecutor(629): Stopping 2023-05-31 10:58:19,624 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 10:58:19,625 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44381-0x101a12b7b230001, quorum=127.0.0.1:64821, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 10:58:19,625 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 10:58:19,625 DEBUG [Listener at localhost.localdomain/46143] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x17909112 to 127.0.0.1:64821 2023-05-31 10:58:19,625 DEBUG [Listener at localhost.localdomain/46143] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 10:58:19,625 INFO [Listener at localhost.localdomain/46143] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,44381,1685530698549' ***** 2023-05-31 10:58:19,626 INFO [Listener at localhost.localdomain/46143] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-31 10:58:19,626 INFO [RS:0;jenkins-hbase20:44381] regionserver.HeapMemoryManager(220): Stopping 2023-05-31 10:58:19,626 INFO [RS:0;jenkins-hbase20:44381] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-31 10:58:19,626 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-31 10:58:19,626 INFO [RS:0;jenkins-hbase20:44381] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-31 10:58:19,627 INFO [RS:0;jenkins-hbase20:44381] regionserver.HRegionServer(3303): Received CLOSE for 9536a05661b4f31b4edc59e0f034c852 2023-05-31 10:58:19,627 INFO [RS:0;jenkins-hbase20:44381] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,44381,1685530698549 2023-05-31 10:58:19,627 DEBUG [RS:0;jenkins-hbase20:44381] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x28454e98 to 127.0.0.1:64821 2023-05-31 10:58:19,627 DEBUG [RS:0;jenkins-hbase20:44381] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 10:58:19,627 INFO [RS:0;jenkins-hbase20:44381] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
2023-05-31 10:58:19,627 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 9536a05661b4f31b4edc59e0f034c852, disabling compactions & flushes 2023-05-31 10:58:19,627 INFO [RS:0;jenkins-hbase20:44381] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-31 10:58:19,627 INFO [RS:0;jenkins-hbase20:44381] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-31 10:58:19,627 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685530699091.9536a05661b4f31b4edc59e0f034c852. 2023-05-31 10:58:19,628 INFO [RS:0;jenkins-hbase20:44381] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-31 10:58:19,628 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685530699091.9536a05661b4f31b4edc59e0f034c852. 2023-05-31 10:58:19,628 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685530699091.9536a05661b4f31b4edc59e0f034c852. after waiting 0 ms 2023-05-31 10:58:19,628 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685530699091.9536a05661b4f31b4edc59e0f034c852. 
2023-05-31 10:58:19,628 INFO [RS:0;jenkins-hbase20:44381] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-05-31 10:58:19,628 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 9536a05661b4f31b4edc59e0f034c852 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-31 10:58:19,628 DEBUG [RS:0;jenkins-hbase20:44381] regionserver.HRegionServer(1478): Online Regions={9536a05661b4f31b4edc59e0f034c852=hbase:namespace,,1685530699091.9536a05661b4f31b4edc59e0f034c852., 1588230740=hbase:meta,,1.1588230740} 2023-05-31 10:58:19,628 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-31 10:58:19,628 DEBUG [RS:0;jenkins-hbase20:44381] regionserver.HRegionServer(1504): Waiting on 1588230740, 9536a05661b4f31b4edc59e0f034c852 2023-05-31 10:58:19,628 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-31 10:58:19,628 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-31 10:58:19,628 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-31 10:58:19,628 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-31 10:58:19,628 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=1.26 KB heapSize=2.89 KB 2023-05-31 10:58:19,639 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.17 KB at sequenceid=9 (bloomFilter=false), 
to=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/meta/1588230740/.tmp/info/25d22c579d214c608707f70032608db7 2023-05-31 10:58:19,639 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/namespace/9536a05661b4f31b4edc59e0f034c852/.tmp/info/a03efb3490c54d119508b911a197fb0b 2023-05-31 10:58:19,645 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/namespace/9536a05661b4f31b4edc59e0f034c852/.tmp/info/a03efb3490c54d119508b911a197fb0b as hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/namespace/9536a05661b4f31b4edc59e0f034c852/info/a03efb3490c54d119508b911a197fb0b 2023-05-31 10:58:19,650 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/namespace/9536a05661b4f31b4edc59e0f034c852/info/a03efb3490c54d119508b911a197fb0b, entries=2, sequenceid=6, filesize=4.8 K 2023-05-31 10:58:19,650 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 9536a05661b4f31b4edc59e0f034c852 in 22ms, sequenceid=6, compaction requested=false 2023-05-31 10:58:19,651 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-31 10:58:19,657 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data 
size=94 B at sequenceid=9 (bloomFilter=false), to=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/meta/1588230740/.tmp/table/e80e74d99e1c4ab2b3bfa03e6c7ec8d9 2023-05-31 10:58:19,661 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/namespace/9536a05661b4f31b4edc59e0f034c852/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-31 10:58:19,662 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685530699091.9536a05661b4f31b4edc59e0f034c852. 2023-05-31 10:58:19,662 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 9536a05661b4f31b4edc59e0f034c852: 2023-05-31 10:58:19,662 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685530699091.9536a05661b4f31b4edc59e0f034c852. 
2023-05-31 10:58:19,664 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/meta/1588230740/.tmp/info/25d22c579d214c608707f70032608db7 as hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/meta/1588230740/info/25d22c579d214c608707f70032608db7
2023-05-31 10:58:19,667 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/meta/1588230740/info/25d22c579d214c608707f70032608db7, entries=10, sequenceid=9, filesize=5.9 K
2023-05-31 10:58:19,668 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/meta/1588230740/.tmp/table/e80e74d99e1c4ab2b3bfa03e6c7ec8d9 as hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/meta/1588230740/table/e80e74d99e1c4ab2b3bfa03e6c7ec8d9
2023-05-31 10:58:19,673 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/meta/1588230740/table/e80e74d99e1c4ab2b3bfa03e6c7ec8d9, entries=2, sequenceid=9, filesize=4.7 K
2023-05-31 10:58:19,673 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.26 KB/1292, heapSize ~2.61 KB/2672, currentSize=0 B/0 for 1588230740 in 45ms, sequenceid=9, compaction requested=false
2023-05-31 10:58:19,673 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta'
2023-05-31 10:58:19,680 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/data/hbase/meta/1588230740/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1
2023-05-31 10:58:19,680 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2023-05-31 10:58:19,681 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740
2023-05-31 10:58:19,681 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-05-31 10:58:19,681 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740
2023-05-31 10:58:19,829 INFO [RS:0;jenkins-hbase20:44381] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,44381,1685530698549; all regions closed.
2023-05-31 10:58:19,830 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/WALs/jenkins-hbase20.apache.org,44381,1685530698549
2023-05-31 10:58:19,841 DEBUG [RS:0;jenkins-hbase20:44381] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/oldWALs
2023-05-31 10:58:19,841 INFO [RS:0;jenkins-hbase20:44381] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C44381%2C1685530698549.meta:.meta(num 1685530699044)
2023-05-31 10:58:19,841 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/WALs/jenkins-hbase20.apache.org,44381,1685530698549
2023-05-31 10:58:19,846 DEBUG [RS:0;jenkins-hbase20:44381] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/oldWALs
2023-05-31 10:58:19,846 INFO [RS:0;jenkins-hbase20:44381] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C44381%2C1685530698549:(num 1685530698944)
2023-05-31 10:58:19,846 DEBUG [RS:0;jenkins-hbase20:44381] ipc.AbstractRpcClient(494): Stopping rpc client
2023-05-31 10:58:19,846 INFO [RS:0;jenkins-hbase20:44381] regionserver.LeaseManager(133): Closed leases
2023-05-31 10:58:19,846 INFO [RS:0;jenkins-hbase20:44381] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown
2023-05-31 10:58:19,847 INFO [RS:0;jenkins-hbase20:44381] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:44381
2023-05-31 10:58:19,849 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-05-31 10:58:19,850 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-05-31 10:58:19,851 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): regionserver:44381-0x101a12b7b230001, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,44381,1685530698549
2023-05-31 10:58:19,851 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): regionserver:44381-0x101a12b7b230001, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-05-31 10:58:19,851 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,44381,1685530698549]
2023-05-31 10:58:19,851 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,44381,1685530698549; numProcessing=1
2023-05-31 10:58:19,852 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,44381,1685530698549 already deleted, retry=false
2023-05-31 10:58:19,852 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,44381,1685530698549 expired; onlineServers=0
2023-05-31 10:58:19,852 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,38971,1685530698514' *****
2023-05-31 10:58:19,852 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0
2023-05-31 10:58:19,852 DEBUG [M:0;jenkins-hbase20:38971] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2cfb2203, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0
2023-05-31 10:58:19,852 INFO [M:0;jenkins-hbase20:38971] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,38971,1685530698514
2023-05-31 10:58:19,853 INFO [M:0;jenkins-hbase20:38971] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,38971,1685530698514; all regions closed.
2023-05-31 10:58:19,853 DEBUG [M:0;jenkins-hbase20:38971] ipc.AbstractRpcClient(494): Stopping rpc client
2023-05-31 10:58:19,853 DEBUG [M:0;jenkins-hbase20:38971] cleaner.LogCleaner(198): Cancelling LogCleaner
2023-05-31 10:58:19,853 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting.
2023-05-31 10:58:19,853 DEBUG [M:0;jenkins-hbase20:38971] cleaner.HFileCleaner(317): Stopping file delete threads
2023-05-31 10:58:19,853 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685530698684] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1685530698684,5,FailOnTimeoutGroup]
2023-05-31 10:58:19,853 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685530698684] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1685530698684,5,FailOnTimeoutGroup]
2023-05-31 10:58:19,853 INFO [M:0;jenkins-hbase20:38971] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish...
2023-05-31 10:58:19,854 INFO [M:0;jenkins-hbase20:38971] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish...
2023-05-31 10:58:19,854 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2023-05-31 10:58:19,854 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 10:58:19,854 INFO [M:0;jenkins-hbase20:38971] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown
2023-05-31 10:58:19,854 DEBUG [M:0;jenkins-hbase20:38971] master.HMaster(1512): Stopping service threads
2023-05-31 10:58:19,854 INFO [M:0;jenkins-hbase20:38971] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher
2023-05-31 10:58:19,854 ERROR [M:0;jenkins-hbase20:38971] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-11,5,PEWorkerGroup]
2023-05-31 10:58:19,854 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-05-31 10:58:19,855 INFO [M:0;jenkins-hbase20:38971] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false
2023-05-31 10:58:19,855 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating.
2023-05-31 10:58:19,855 DEBUG [M:0;jenkins-hbase20:38971] zookeeper.ZKUtil(398): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-05-31 10:58:19,855 WARN [M:0;jenkins-hbase20:38971] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-05-31 10:58:19,855 INFO [M:0;jenkins-hbase20:38971] assignment.AssignmentManager(315): Stopping assignment manager
2023-05-31 10:58:19,855 INFO [M:0;jenkins-hbase20:38971] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-05-31 10:58:19,856 DEBUG [M:0;jenkins-hbase20:38971] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-05-31 10:58:19,856 INFO [M:0;jenkins-hbase20:38971] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-31 10:58:19,856 DEBUG [M:0;jenkins-hbase20:38971] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-31 10:58:19,856 DEBUG [M:0;jenkins-hbase20:38971] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-05-31 10:58:19,856 DEBUG [M:0;jenkins-hbase20:38971] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-31 10:58:19,856 INFO [M:0;jenkins-hbase20:38971] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=24.09 KB heapSize=29.59 KB
2023-05-31 10:58:19,865 INFO [M:0;jenkins-hbase20:38971] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.09 KB at sequenceid=66 (bloomFilter=true), to=hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/5e3a42dbe311483f9b5e2c0ffb59e11b
2023-05-31 10:58:19,871 DEBUG [M:0;jenkins-hbase20:38971] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/5e3a42dbe311483f9b5e2c0ffb59e11b as hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/5e3a42dbe311483f9b5e2c0ffb59e11b
2023-05-31 10:58:19,875 INFO [M:0;jenkins-hbase20:38971] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38283/user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/5e3a42dbe311483f9b5e2c0ffb59e11b, entries=8, sequenceid=66, filesize=6.3 K
2023-05-31 10:58:19,876 INFO [M:0;jenkins-hbase20:38971] regionserver.HRegion(2948): Finished flush of dataSize ~24.09 KB/24669, heapSize ~29.57 KB/30280, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 20ms, sequenceid=66, compaction requested=false
2023-05-31 10:58:19,878 INFO [M:0;jenkins-hbase20:38971] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-31 10:58:19,878 DEBUG [M:0;jenkins-hbase20:38971] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-05-31 10:58:19,878 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/db5d02a8-f4d3-bfe8-9c2d-ba72f88d3110/MasterData/WALs/jenkins-hbase20.apache.org,38971,1685530698514
2023-05-31 10:58:19,881 INFO [M:0;jenkins-hbase20:38971] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-05-31 10:58:19,881 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-05-31 10:58:19,881 INFO [M:0;jenkins-hbase20:38971] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:38971
2023-05-31 10:58:19,883 DEBUG [M:0;jenkins-hbase20:38971] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,38971,1685530698514 already deleted, retry=false
2023-05-31 10:58:20,031 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-31 10:58:20,031 INFO [M:0;jenkins-hbase20:38971] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,38971,1685530698514; zookeeper connection closed.
2023-05-31 10:58:20,031 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): master:38971-0x101a12b7b230000, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-31 10:58:20,131 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): regionserver:44381-0x101a12b7b230001, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-31 10:58:20,131 INFO [RS:0;jenkins-hbase20:44381] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,44381,1685530698549; zookeeper connection closed.
2023-05-31 10:58:20,131 DEBUG [Listener at localhost.localdomain/46143-EventThread] zookeeper.ZKWatcher(600): regionserver:44381-0x101a12b7b230001, quorum=127.0.0.1:64821, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-31 10:58:20,133 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@448e74aa] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@448e74aa
2023-05-31 10:58:20,134 INFO [Listener at localhost.localdomain/46143] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete
2023-05-31 10:58:20,134 WARN [Listener at localhost.localdomain/46143] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-05-31 10:58:20,144 INFO [Listener at localhost.localdomain/46143] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-31 10:58:20,150 WARN [BP-1213116859-148.251.75.209-1685530698023 heartbeating to localhost.localdomain/127.0.0.1:38283] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-05-31 10:58:20,150 WARN [BP-1213116859-148.251.75.209-1685530698023 heartbeating to localhost.localdomain/127.0.0.1:38283] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1213116859-148.251.75.209-1685530698023 (Datanode Uuid 9994f6d1-f73d-4389-9d94-3f6c2f6f034f) service to localhost.localdomain/127.0.0.1:38283
2023-05-31 10:58:20,151 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8d934caa-9378-6585-5500-17628779eee5/cluster_379bf399-0d59-f101-c9c9-9d92adf07918/dfs/data/data3/current/BP-1213116859-148.251.75.209-1685530698023] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 10:58:20,151 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8d934caa-9378-6585-5500-17628779eee5/cluster_379bf399-0d59-f101-c9c9-9d92adf07918/dfs/data/data4/current/BP-1213116859-148.251.75.209-1685530698023] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 10:58:20,152 WARN [Listener at localhost.localdomain/46143] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-05-31 10:58:20,154 INFO [Listener at localhost.localdomain/46143] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-31 10:58:20,260 WARN [BP-1213116859-148.251.75.209-1685530698023 heartbeating to localhost.localdomain/127.0.0.1:38283] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-05-31 10:58:20,260 WARN [BP-1213116859-148.251.75.209-1685530698023 heartbeating to localhost.localdomain/127.0.0.1:38283] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1213116859-148.251.75.209-1685530698023 (Datanode Uuid c23a5858-0b2f-4610-a543-7e81cd1d39e2) service to localhost.localdomain/127.0.0.1:38283
2023-05-31 10:58:20,262 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8d934caa-9378-6585-5500-17628779eee5/cluster_379bf399-0d59-f101-c9c9-9d92adf07918/dfs/data/data1/current/BP-1213116859-148.251.75.209-1685530698023] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 10:58:20,263 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/8d934caa-9378-6585-5500-17628779eee5/cluster_379bf399-0d59-f101-c9c9-9d92adf07918/dfs/data/data2/current/BP-1213116859-148.251.75.209-1685530698023] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 10:58:20,275 INFO [Listener at localhost.localdomain/46143] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0
2023-05-31 10:58:20,391 INFO [Listener at localhost.localdomain/46143] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-05-31 10:58:20,402 INFO [Listener at localhost.localdomain/46143] hbase.HBaseTestingUtility(1293): Minicluster is down
2023-05-31 10:58:20,411 INFO [Listener at localhost.localdomain/46143] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=131 (was 107) - Thread LEAK? -, OpenFileDescriptor=558 (was 531) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=43 (was 47), ProcessCount=165 (was 165), AvailableMemoryMB=7809 (was 7819)