2023-06-08 18:54:01,986 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f
2023-06-08 18:54:01,999 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.regionserver.wal.TestLogRolling timeout: 13 mins
2023-06-08 18:54:02,034 INFO [Time-limited test] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=10, OpenFileDescriptor=263, MaxFileDescriptor=60000, SystemLoadAverage=423, ProcessCount=186, AvailableMemoryMB=2105
2023-06-08 18:54:02,040 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-06-08 18:54:02,041 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f/cluster_76611751-72e3-fc7b-6601-4b6d79a553f5, deleteOnExit=true
2023-06-08 18:54:02,041 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-06-08 18:54:02,042 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f/test.cache.data in system properties and HBase conf
2023-06-08 18:54:02,042 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f/hadoop.tmp.dir in system properties and HBase conf
2023-06-08 18:54:02,043 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f/hadoop.log.dir in system properties and HBase conf
2023-06-08 18:54:02,043 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f/mapreduce.cluster.local.dir in system properties and HBase conf
2023-06-08 18:54:02,043 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-06-08 18:54:02,044 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-06-08 18:54:02,132 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-06-08 18:54:02,438 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-06-08 18:54:02,444 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-06-08 18:54:02,444 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-06-08 18:54:02,445 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-06-08 18:54:02,446 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-06-08 18:54:02,446 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-06-08 18:54:02,447 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-06-08 18:54:02,447 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-06-08 18:54:02,448 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f/dfs.journalnode.edits.dir in system properties and HBase conf
2023-06-08 18:54:02,448 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-06-08 18:54:02,448 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f/nfs.dump.dir in system properties and HBase conf
2023-06-08 18:54:02,449 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f/java.io.tmpdir in system properties and HBase conf
2023-06-08 18:54:02,449 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f/dfs.journalnode.edits.dir in system properties and HBase conf
2023-06-08 18:54:02,450 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf
2023-06-08 18:54:02,450 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f/fs.s3a.committer.staging.tmp.path in system properties and HBase conf
Formatting using clusterid: testClusterID
2023-06-08 18:54:02,970 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000.
2023-06-08 18:54:02,982 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-06-08 18:54:02,986 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-06-08 18:54:03,204 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2023-06-08 18:54:03,373 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2023-06-08 18:54:03,389 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-08 18:54:03,418 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26
2023-06-08 18:54:03,444 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f/java.io.tmpdir/Jetty_localhost_localdomain_38409_hdfs____zc8xw2/webapp
2023-06-08 18:54:03,575 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:38409
2023-06-08 18:54:03,582 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000.
2023-06-08 18:54:03,584 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-06-08 18:54:03,584 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-06-08 18:54:04,001 WARN [Listener at localhost.localdomain/44823] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-08 18:54:04,088 WARN [Listener at localhost.localdomain/44823] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-06-08 18:54:04,110 WARN [Listener at localhost.localdomain/44823] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-08 18:54:04,117 INFO [Listener at localhost.localdomain/44823] log.Slf4jLog(67): jetty-6.1.26
2023-06-08 18:54:04,122 INFO [Listener at localhost.localdomain/44823] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f/java.io.tmpdir/Jetty_localhost_37693_datanode____.8k37sd/webapp
2023-06-08 18:54:04,198 INFO [Listener at localhost.localdomain/44823] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37693
2023-06-08 18:54:04,531 WARN [Listener at localhost.localdomain/36043] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-08 18:54:04,542 WARN [Listener at localhost.localdomain/36043] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-06-08 18:54:04,546 WARN [Listener at localhost.localdomain/36043] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-08 18:54:04,548 INFO [Listener at localhost.localdomain/36043] log.Slf4jLog(67): jetty-6.1.26
2023-06-08 18:54:04,552 INFO [Listener at localhost.localdomain/36043] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f/java.io.tmpdir/Jetty_localhost_44037_datanode____.8nh7gl/webapp
2023-06-08 18:54:04,637 INFO [Listener at localhost.localdomain/36043] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44037
2023-06-08 18:54:04,646 WARN [Listener at localhost.localdomain/35315] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-08 18:54:04,907 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x86d2042489c98c6c: Processing first storage report for DS-f4768888-4875-4f84-b58d-1a3cdac79535 from datanode a77ec2f1-4fc8-4f67-8577-7f26fd566852
2023-06-08 18:54:04,908 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x86d2042489c98c6c: from storage DS-f4768888-4875-4f84-b58d-1a3cdac79535 node DatanodeRegistration(127.0.0.1:41015, datanodeUuid=a77ec2f1-4fc8-4f67-8577-7f26fd566852, infoPort=39017, infoSecurePort=0, ipcPort=36043, storageInfo=lv=-57;cid=testClusterID;nsid=1596952195;c=1686250443048), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0
2023-06-08 18:54:04,909 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf4f10b9d4d9fe87a: Processing first storage report for DS-59d0442d-94c6-4f42-bb54-fa205bb7c2e6 from datanode 945f8a5a-92f5-47d5-8095-96ade6fa03e0
2023-06-08 18:54:04,909 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf4f10b9d4d9fe87a: from storage DS-59d0442d-94c6-4f42-bb54-fa205bb7c2e6 node DatanodeRegistration(127.0.0.1:40843, datanodeUuid=945f8a5a-92f5-47d5-8095-96ade6fa03e0, infoPort=40951, infoSecurePort=0, ipcPort=35315, storageInfo=lv=-57;cid=testClusterID;nsid=1596952195;c=1686250443048), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-08 18:54:04,909 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x86d2042489c98c6c: Processing first storage report for DS-fa78841e-1283-4fd3-a323-42f4d91ef9e2 from datanode a77ec2f1-4fc8-4f67-8577-7f26fd566852
2023-06-08 18:54:04,909 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x86d2042489c98c6c: from storage DS-fa78841e-1283-4fd3-a323-42f4d91ef9e2 node DatanodeRegistration(127.0.0.1:41015, datanodeUuid=a77ec2f1-4fc8-4f67-8577-7f26fd566852, infoPort=39017, infoSecurePort=0, ipcPort=36043, storageInfo=lv=-57;cid=testClusterID;nsid=1596952195;c=1686250443048), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-08 18:54:04,909 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf4f10b9d4d9fe87a: Processing first storage report for DS-72a8e7ff-71e7-43b3-a6de-f7c7112e3f13 from datanode 945f8a5a-92f5-47d5-8095-96ade6fa03e0
2023-06-08 18:54:04,909 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf4f10b9d4d9fe87a: from storage DS-72a8e7ff-71e7-43b3-a6de-f7c7112e3f13 node DatanodeRegistration(127.0.0.1:40843, datanodeUuid=945f8a5a-92f5-47d5-8095-96ade6fa03e0, infoPort=40951, infoSecurePort=0, ipcPort=35315, storageInfo=lv=-57;cid=testClusterID;nsid=1596952195;c=1686250443048), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-08 18:54:04,986 DEBUG [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f
2023-06-08 18:54:05,051 INFO [Listener at localhost.localdomain/35315] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f/cluster_76611751-72e3-fc7b-6601-4b6d79a553f5/zookeeper_0, clientPort=53627, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f/cluster_76611751-72e3-fc7b-6601-4b6d79a553f5/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f/cluster_76611751-72e3-fc7b-6601-4b6d79a553f5/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-06-08 18:54:05,062 INFO [Listener at localhost.localdomain/35315] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=53627
2023-06-08 18:54:05,070 INFO [Listener at localhost.localdomain/35315] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-08 18:54:05,071 INFO [Listener at localhost.localdomain/35315] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-08 18:54:05,686 INFO [Listener at localhost.localdomain/35315] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184 with version=8
2023-06-08 18:54:05,687 INFO [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/hbase-staging
2023-06-08 18:54:05,956 INFO [Listener at localhost.localdomain/35315] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2023-06-08 18:54:06,335 INFO [Listener at localhost.localdomain/35315] client.ConnectionUtils(127): master/jenkins-hbase17:0 server-side Connection retries=45
2023-06-08 18:54:06,361 INFO [Listener at localhost.localdomain/35315] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-08 18:54:06,361 INFO [Listener at localhost.localdomain/35315] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-06-08 18:54:06,361 INFO [Listener at localhost.localdomain/35315] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-06-08 18:54:06,361 INFO [Listener at localhost.localdomain/35315] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-08 18:54:06,362 INFO [Listener at localhost.localdomain/35315] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-06-08 18:54:06,479 INFO [Listener at localhost.localdomain/35315] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-06-08 18:54:06,543 DEBUG [Listener at localhost.localdomain/35315] util.ClassSize(228): Using Unsafe to estimate memory layout
2023-06-08 18:54:06,664 INFO [Listener at localhost.localdomain/35315] ipc.NettyRpcServer(120): Bind to /136.243.18.41:35461
2023-06-08 18:54:06,676 INFO [Listener at localhost.localdomain/35315] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-08 18:54:06,679 INFO [Listener at localhost.localdomain/35315] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-08 18:54:06,706 INFO [Listener at localhost.localdomain/35315] zookeeper.RecoverableZooKeeper(93): Process identifier=master:35461 connecting to ZooKeeper ensemble=127.0.0.1:53627
2023-06-08 18:54:06,764 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:354610x0, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-06-08 18:54:06,766 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:35461-0x100abc9540c0000 connected
2023-06-08 18:54:06,789 DEBUG [Listener at localhost.localdomain/35315] zookeeper.ZKUtil(164): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-06-08 18:54:06,790 DEBUG [Listener at localhost.localdomain/35315] zookeeper.ZKUtil(164): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-08 18:54:06,793 DEBUG [Listener at localhost.localdomain/35315] zookeeper.ZKUtil(164): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-06-08 18:54:06,800 DEBUG [Listener at localhost.localdomain/35315] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35461
2023-06-08 18:54:06,800 DEBUG [Listener at localhost.localdomain/35315] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35461
2023-06-08 18:54:06,801 DEBUG [Listener at localhost.localdomain/35315] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35461
2023-06-08 18:54:06,801 DEBUG [Listener at localhost.localdomain/35315] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35461
2023-06-08 18:54:06,801 DEBUG [Listener at localhost.localdomain/35315] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35461
2023-06-08 18:54:06,808 INFO [Listener at localhost.localdomain/35315] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184, hbase.cluster.distributed=false
2023-06-08 18:54:06,880 INFO [Listener at localhost.localdomain/35315] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45
2023-06-08 18:54:06,880 INFO [Listener at localhost.localdomain/35315] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-08 18:54:06,881 INFO [Listener at localhost.localdomain/35315] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-06-08 18:54:06,881 INFO [Listener at localhost.localdomain/35315] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-06-08 18:54:06,881 INFO [Listener at localhost.localdomain/35315] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-08 18:54:06,881 INFO [Listener at localhost.localdomain/35315] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-06-08 18:54:06,888 INFO [Listener at localhost.localdomain/35315] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-06-08 18:54:06,892 INFO [Listener at localhost.localdomain/35315] ipc.NettyRpcServer(120): Bind to /136.243.18.41:40985
2023-06-08 18:54:06,894 INFO [Listener at localhost.localdomain/35315] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-06-08 18:54:06,900 DEBUG [Listener at localhost.localdomain/35315] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-06-08 18:54:06,902 INFO [Listener at localhost.localdomain/35315] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-08 18:54:06,905 INFO [Listener at localhost.localdomain/35315] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-08 18:54:06,906 INFO [Listener at localhost.localdomain/35315] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40985 connecting to ZooKeeper ensemble=127.0.0.1:53627
2023-06-08 18:54:06,910 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): regionserver:409850x0, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-06-08 18:54:06,911 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40985-0x100abc9540c0001 connected
2023-06-08 18:54:06,912 DEBUG [Listener at localhost.localdomain/35315] zookeeper.ZKUtil(164): regionserver:40985-0x100abc9540c0001, quorum=127.0.0.1:53627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-06-08 18:54:06,913 DEBUG [Listener at localhost.localdomain/35315] zookeeper.ZKUtil(164): regionserver:40985-0x100abc9540c0001, quorum=127.0.0.1:53627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-08 18:54:06,917 DEBUG [Listener at localhost.localdomain/35315] zookeeper.ZKUtil(164): regionserver:40985-0x100abc9540c0001, quorum=127.0.0.1:53627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-06-08 18:54:06,920 DEBUG [Listener at localhost.localdomain/35315] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40985
2023-06-08 18:54:06,920 DEBUG [Listener at localhost.localdomain/35315] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40985
2023-06-08 18:54:06,921 DEBUG [Listener at localhost.localdomain/35315] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40985
2023-06-08 18:54:06,924 DEBUG [Listener at localhost.localdomain/35315] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40985
2023-06-08 18:54:06,925 DEBUG [Listener at localhost.localdomain/35315] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40985
2023-06-08 18:54:06,930 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase17.apache.org,35461,1686250445812
2023-06-08 18:54:06,941 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-06-08 18:54:06,942 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase17.apache.org,35461,1686250445812
2023-06-08 18:54:06,961 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-06-08 18:54:06,961 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): regionserver:40985-0x100abc9540c0001, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-06-08 18:54:06,961 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-08 18:54:06,962 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-06-08 18:54:06,963 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase17.apache.org,35461,1686250445812 from backup master directory
2023-06-08 18:54:06,963 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-06-08 18:54:06,965 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase17.apache.org,35461,1686250445812
2023-06-08 18:54:06,966 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-06-08 18:54:06,966 WARN [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-06-08 18:54:06,966 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase17.apache.org,35461,1686250445812
2023-06-08 18:54:06,969 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0
2023-06-08 18:54:06,970 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0
2023-06-08 18:54:07,052 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/hbase.id with ID: 1142a2a6-2ee1-4de7-85a1-3042ff1054d0
2023-06-08 18:54:07,105 INFO [master/jenkins-hbase17:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-08 18:54:07,126 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-08 18:54:07,178 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x114ee5b8 to 127.0.0.1:53627 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-06-08 18:54:07,210 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@12483af2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-06-08 18:54:07,232 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-06-08 18:54:07,234 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000
2023-06-08 18:54:07,243 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-06-08 18:54:07,273 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/MasterData/data/master/store-tmp
2023-06-08 18:54:07,301 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-08 18:54:07,302 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-06-08 18:54:07,302 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-08 18:54:07,302 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-08 18:54:07,302 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-06-08 18:54:07,302 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-08 18:54:07,302 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-08 18:54:07,302 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-08 18:54:07,304 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/MasterData/WALs/jenkins-hbase17.apache.org,35461,1686250445812 2023-06-08 18:54:07,329 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C35461%2C1686250445812, suffix=, logDir=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/MasterData/WALs/jenkins-hbase17.apache.org,35461,1686250445812, archiveDir=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/MasterData/oldWALs, maxLogs=10 2023-06-08 18:54:07,359 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] util.CommonFSUtils$DfsBuilderUtility(753): Could not find replicate method on builder; will not set replicate when creating output stream java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DistributedFileSystem$HdfsDataOutputStreamBuilder.replicate() at java.lang.Class.getMethod(Class.java:1786) at org.apache.hadoop.hbase.util.CommonFSUtils$DfsBuilderUtility.(CommonFSUtils.java:750) at org.apache.hadoop.hbase.util.CommonFSUtils.createForWal(CommonFSUtils.java:802) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.initOutput(ProtobufLogWriter.java:102) at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:160) at org.apache.hadoop.hbase.wal.FSHLogProvider.createWriter(FSHLogProvider.java:78) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:307) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:70) at 
org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:881) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:574) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.init(AbstractFSWAL.java:515) at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:160) at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:62) at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:295) at org.apache.hadoop.hbase.master.region.MasterRegion.createWAL(MasterRegion.java:200) at org.apache.hadoop.hbase.master.region.MasterRegion.bootstrap(MasterRegion.java:220) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:348) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-06-08 18:54:07,387 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/MasterData/WALs/jenkins-hbase17.apache.org,35461,1686250445812/jenkins-hbase17.apache.org%2C35461%2C1686250445812.1686250447356 2023-06-08 18:54:07,387 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41015,DS-f4768888-4875-4f84-b58d-1a3cdac79535,DISK], DatanodeInfoWithStorage[127.0.0.1:40843,DS-59d0442d-94c6-4f42-bb54-fa205bb7c2e6,DISK]] 2023-06-08 18:54:07,387 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 
'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-08 18:54:07,388 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:54:07,390 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-08 18:54:07,391 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-08 18:54:07,447 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-06-08 18:54:07,454 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-08 18:54:07,476 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window 
factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-08 18:54:07,489 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:54:07,494 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-08 18:54:07,496 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-08 18:54:07,511 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-08 18:54:07,523 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 18:54:07,524 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=711619, jitterRate=-0.09513059258460999}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-08 18:54:07,525 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] 
regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-08 18:54:07,526 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-08 18:54:07,543 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-08 18:54:07,544 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-06-08 18:54:07,546 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-06-08 18:54:07,547 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-06-08 18:54:07,578 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 30 msec 2023-06-08 18:54:07,579 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-08 18:54:07,611 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-08 18:54:07,618 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-06-08 18:54:07,663 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-08 18:54:07,668 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-06-08 18:54:07,671 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-08 18:54:07,676 INFO [master/jenkins-hbase17:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-08 18:54:07,680 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-08 18:54:07,682 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:54:07,684 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-08 18:54:07,684 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): 
master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-08 18:54:07,694 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-08 18:54:07,698 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): regionserver:40985-0x100abc9540c0001, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-08 18:54:07,698 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-08 18:54:07,698 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:54:07,699 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase17.apache.org,35461,1686250445812, sessionid=0x100abc9540c0000, setting cluster-up flag (Was=false) 2023-06-08 18:54:07,711 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:54:07,714 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-08 18:54:07,716 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] 
procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,35461,1686250445812 2023-06-08 18:54:07,719 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:54:07,723 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-08 18:54:07,724 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,35461,1686250445812 2023-06-08 18:54:07,726 WARN [master/jenkins-hbase17:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/.hbase-snapshot/.tmp 2023-06-08 18:54:07,731 INFO [RS:0;jenkins-hbase17:40985] regionserver.HRegionServer(951): ClusterId : 1142a2a6-2ee1-4de7-85a1-3042ff1054d0 2023-06-08 18:54:07,734 DEBUG [RS:0;jenkins-hbase17:40985] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-08 18:54:07,738 DEBUG [RS:0;jenkins-hbase17:40985] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-08 18:54:07,739 DEBUG [RS:0;jenkins-hbase17:40985] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-08 18:54:07,741 DEBUG [RS:0;jenkins-hbase17:40985] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-08 18:54:07,743 DEBUG [RS:0;jenkins-hbase17:40985] zookeeper.ReadOnlyZKClient(139): Connect 0x1a5b4f7a to 127.0.0.1:53627 with session timeout=90000ms, retries 30, retry interval 
1000ms, keepAlive=60000ms 2023-06-08 18:54:07,748 DEBUG [RS:0;jenkins-hbase17:40985] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@149cb241, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-08 18:54:07,749 DEBUG [RS:0;jenkins-hbase17:40985] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@10a3240a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-06-08 18:54:07,782 DEBUG [RS:0;jenkins-hbase17:40985] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase17:40985 2023-06-08 18:54:07,787 INFO [RS:0;jenkins-hbase17:40985] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-08 18:54:07,787 INFO [RS:0;jenkins-hbase17:40985] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-08 18:54:07,787 DEBUG [RS:0;jenkins-hbase17:40985] regionserver.HRegionServer(1022): About to register with Master. 
2023-06-08 18:54:07,790 INFO [RS:0;jenkins-hbase17:40985] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase17.apache.org,35461,1686250445812 with isa=jenkins-hbase17.apache.org/136.243.18.41:40985, startcode=1686250446879 2023-06-08 18:54:07,807 DEBUG [RS:0;jenkins-hbase17:40985] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-08 18:54:07,863 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-08 18:54:07,873 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-06-08 18:54:07,873 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-06-08 18:54:07,874 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-06-08 18:54:07,874 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-06-08 18:54:07,874 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase17:0, corePoolSize=10, maxPoolSize=10 2023-06-08 18:54:07,874 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:54:07,874 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] 
executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-06-08 18:54:07,875 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:54:07,876 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1686250477876 2023-06-08 18:54:07,878 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-08 18:54:07,884 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-08 18:54:07,886 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-08 18:54:07,891 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 
'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-08 18:54:07,892 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-08 18:54:07,900 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-08 18:54:07,901 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-08 18:54:07,901 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-08 18:54:07,901 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-08 18:54:07,904 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-06-08 18:54:07,905 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-08 18:54:07,908 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-08 18:54:07,908 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-08 18:54:07,911 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-08 18:54:07,911 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-08 18:54:07,913 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1686250447913,5,FailOnTimeoutGroup] 2023-06-08 18:54:07,918 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1686250447913,5,FailOnTimeoutGroup] 2023-06-08 18:54:07,918 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-08 18:54:07,918 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 
2023-06-08 18:54:07,919 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:48327, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-06-08 18:54:07,920 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-08 18:54:07,921 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-06-08 18:54:07,941 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35461] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,40985,1686250446879 2023-06-08 18:54:07,942 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-08 18:54:07,943 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-08 18:54:07,943 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 
'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184 2023-06-08 18:54:07,965 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:54:07,967 DEBUG [RS:0;jenkins-hbase17:40985] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184 2023-06-08 18:54:07,967 DEBUG [RS:0;jenkins-hbase17:40985] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:44823 2023-06-08 18:54:07,967 DEBUG [RS:0;jenkins-hbase17:40985] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-08 18:54:07,972 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-08 18:54:07,973 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 18:54:07,973 DEBUG [RS:0;jenkins-hbase17:40985] zookeeper.ZKUtil(162): 
regionserver:40985-0x100abc9540c0001, quorum=127.0.0.1:53627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,40985,1686250446879 2023-06-08 18:54:07,974 WARN [RS:0;jenkins-hbase17:40985] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-06-08 18:54:07,974 INFO [RS:0;jenkins-hbase17:40985] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 18:54:07,974 DEBUG [RS:0;jenkins-hbase17:40985] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/WALs/jenkins-hbase17.apache.org,40985,1686250446879 2023-06-08 18:54:07,975 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,40985,1686250446879] 2023-06-08 18:54:07,977 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/meta/1588230740/info 2023-06-08 18:54:07,977 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-08 18:54:07,979 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:54:07,980 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-08 18:54:07,983 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/meta/1588230740/rep_barrier 2023-06-08 18:54:07,984 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-08 18:54:07,985 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:54:07,985 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, 
cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-08 18:54:07,986 DEBUG [RS:0;jenkins-hbase17:40985] zookeeper.ZKUtil(162): regionserver:40985-0x100abc9540c0001, quorum=127.0.0.1:53627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,40985,1686250446879 2023-06-08 18:54:07,988 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/meta/1588230740/table 2023-06-08 18:54:07,989 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-08 18:54:07,990 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:54:07,991 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/meta/1588230740 2023-06-08 18:54:07,992 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/meta/1588230740 2023-06-08 18:54:07,996 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-08 18:54:07,998 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-08 18:54:07,998 DEBUG [RS:0;jenkins-hbase17:40985] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-08 18:54:08,001 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 18:54:08,003 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=765199, jitterRate=-0.026999235153198242}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-08 18:54:08,003 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-08 18:54:08,003 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-08 18:54:08,004 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-08 18:54:08,004 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-08 18:54:08,004 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-08 18:54:08,004 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-08 18:54:08,005 INFO [PEWorker-1] 
regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-08 18:54:08,005 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-08 18:54:08,008 INFO [RS:0;jenkins-hbase17:40985] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-08 18:54:08,011 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-08 18:54:08,011 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-08 18:54:08,020 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-08 18:54:08,028 INFO [RS:0;jenkins-hbase17:40985] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-08 18:54:08,031 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-08 18:54:08,033 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-08 18:54:08,034 INFO [RS:0;jenkins-hbase17:40985] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-08 18:54:08,034 INFO [RS:0;jenkins-hbase17:40985] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, 
period=60000, unit=MILLISECONDS is enabled. 2023-06-08 18:54:08,035 INFO [RS:0;jenkins-hbase17:40985] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-08 18:54:08,041 INFO [RS:0;jenkins-hbase17:40985] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-06-08 18:54:08,042 DEBUG [RS:0;jenkins-hbase17:40985] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:54:08,042 DEBUG [RS:0;jenkins-hbase17:40985] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:54:08,042 DEBUG [RS:0;jenkins-hbase17:40985] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:54:08,042 DEBUG [RS:0;jenkins-hbase17:40985] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:54:08,042 DEBUG [RS:0;jenkins-hbase17:40985] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:54:08,042 DEBUG [RS:0;jenkins-hbase17:40985] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-06-08 18:54:08,042 DEBUG [RS:0;jenkins-hbase17:40985] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:54:08,043 DEBUG [RS:0;jenkins-hbase17:40985] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 
18:54:08,043 DEBUG [RS:0;jenkins-hbase17:40985] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:54:08,043 DEBUG [RS:0;jenkins-hbase17:40985] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:54:08,044 INFO [RS:0;jenkins-hbase17:40985] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-08 18:54:08,044 INFO [RS:0;jenkins-hbase17:40985] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-08 18:54:08,044 INFO [RS:0;jenkins-hbase17:40985] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-08 18:54:08,058 INFO [RS:0;jenkins-hbase17:40985] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-08 18:54:08,060 INFO [RS:0;jenkins-hbase17:40985] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,40985,1686250446879-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-06-08 18:54:08,073 INFO [RS:0;jenkins-hbase17:40985] regionserver.Replication(203): jenkins-hbase17.apache.org,40985,1686250446879 started 2023-06-08 18:54:08,073 INFO [RS:0;jenkins-hbase17:40985] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,40985,1686250446879, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:40985, sessionid=0x100abc9540c0001 2023-06-08 18:54:08,073 DEBUG [RS:0;jenkins-hbase17:40985] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-08 18:54:08,073 DEBUG [RS:0;jenkins-hbase17:40985] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,40985,1686250446879 2023-06-08 18:54:08,073 DEBUG [RS:0;jenkins-hbase17:40985] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,40985,1686250446879' 2023-06-08 18:54:08,073 DEBUG [RS:0;jenkins-hbase17:40985] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-08 18:54:08,074 DEBUG [RS:0;jenkins-hbase17:40985] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-08 18:54:08,074 DEBUG [RS:0;jenkins-hbase17:40985] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-08 18:54:08,075 DEBUG [RS:0;jenkins-hbase17:40985] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-08 18:54:08,075 DEBUG [RS:0;jenkins-hbase17:40985] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,40985,1686250446879 2023-06-08 18:54:08,075 DEBUG [RS:0;jenkins-hbase17:40985] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,40985,1686250446879' 2023-06-08 18:54:08,075 DEBUG [RS:0;jenkins-hbase17:40985] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures 
on node: '/hbase/online-snapshot/abort' 2023-06-08 18:54:08,075 DEBUG [RS:0;jenkins-hbase17:40985] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-08 18:54:08,076 DEBUG [RS:0;jenkins-hbase17:40985] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-08 18:54:08,076 INFO [RS:0;jenkins-hbase17:40985] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-08 18:54:08,076 INFO [RS:0;jenkins-hbase17:40985] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-06-08 18:54:08,185 INFO [RS:0;jenkins-hbase17:40985] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C40985%2C1686250446879, suffix=, logDir=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/WALs/jenkins-hbase17.apache.org,40985,1686250446879, archiveDir=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/oldWALs, maxLogs=32 2023-06-08 18:54:08,186 DEBUG [jenkins-hbase17:35461] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-08 18:54:08,189 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,40985,1686250446879, state=OPENING 2023-06-08 18:54:08,196 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-08 18:54:08,197 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:54:08,198 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path 
/hbase/meta-region-server: CHANGED 2023-06-08 18:54:08,203 INFO [RS:0;jenkins-hbase17:40985] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/WALs/jenkins-hbase17.apache.org,40985,1686250446879/jenkins-hbase17.apache.org%2C40985%2C1686250446879.1686250448189 2023-06-08 18:54:08,203 DEBUG [RS:0;jenkins-hbase17:40985] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41015,DS-f4768888-4875-4f84-b58d-1a3cdac79535,DISK], DatanodeInfoWithStorage[127.0.0.1:40843,DS-59d0442d-94c6-4f42-bb54-fa205bb7c2e6,DISK]] 2023-06-08 18:54:08,204 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,40985,1686250446879}] 2023-06-08 18:54:08,388 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,40985,1686250446879 2023-06-08 18:54:08,391 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-08 18:54:08,394 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:46212, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-08 18:54:08,406 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-08 18:54:08,407 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 18:54:08,410 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C40985%2C1686250446879.meta, suffix=.meta, 
logDir=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/WALs/jenkins-hbase17.apache.org,40985,1686250446879, archiveDir=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/oldWALs, maxLogs=32 2023-06-08 18:54:08,423 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/WALs/jenkins-hbase17.apache.org,40985,1686250446879/jenkins-hbase17.apache.org%2C40985%2C1686250446879.meta.1686250448412.meta 2023-06-08 18:54:08,423 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40843,DS-59d0442d-94c6-4f42-bb54-fa205bb7c2e6,DISK], DatanodeInfoWithStorage[127.0.0.1:41015,DS-f4768888-4875-4f84-b58d-1a3cdac79535,DISK]] 2023-06-08 18:54:08,424 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-08 18:54:08,425 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-08 18:54:08,440 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-08 18:54:08,444 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-06-08 18:54:08,449 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-08 18:54:08,449 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:54:08,449 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-08 18:54:08,449 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-08 18:54:08,451 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-08 18:54:08,454 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/meta/1588230740/info 2023-06-08 18:54:08,454 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/meta/1588230740/info 2023-06-08 18:54:08,454 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-08 18:54:08,455 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:54:08,455 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-08 18:54:08,456 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/meta/1588230740/rep_barrier 2023-06-08 18:54:08,456 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/meta/1588230740/rep_barrier 2023-06-08 18:54:08,457 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-08 18:54:08,458 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:54:08,458 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-08 18:54:08,459 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/meta/1588230740/table 2023-06-08 18:54:08,459 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/meta/1588230740/table 2023-06-08 18:54:08,460 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-08 18:54:08,460 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:54:08,462 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/meta/1588230740 2023-06-08 18:54:08,465 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/meta/1588230740 2023-06-08 18:54:08,468 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-06-08 18:54:08,470 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-08 18:54:08,471 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=835762, jitterRate=0.06272712349891663}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-08 18:54:08,471 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-08 18:54:08,481 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1686250448381 2023-06-08 18:54:08,497 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-08 18:54:08,497 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-08 18:54:08,498 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,40985,1686250446879, state=OPEN 2023-06-08 18:54:08,500 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-08 18:54:08,500 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-08 18:54:08,507 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-08 18:54:08,507 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,40985,1686250446879 in 296 msec 2023-06-08 18:54:08,514 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-08 18:54:08,514 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 488 msec 2023-06-08 18:54:08,520 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 727 msec 2023-06-08 18:54:08,520 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1686250448520, completionTime=-1 2023-06-08 18:54:08,520 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-08 18:54:08,521 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-06-08 18:54:08,576 DEBUG [hconnection-0x5a29c902-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-08 18:54:08,579 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:46216, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-08 18:54:08,594 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-08 18:54:08,594 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1686250508594 2023-06-08 18:54:08,594 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1686250568594 2023-06-08 18:54:08,594 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 73 msec 2023-06-08 18:54:08,615 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,35461,1686250445812-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-08 18:54:08,615 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,35461,1686250445812-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-08 18:54:08,616 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,35461,1686250445812-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-06-08 18:54:08,617 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase17:35461, period=300000, unit=MILLISECONDS is enabled.
2023-06-08 18:54:08,617 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled.
2023-06-08 18:54:08,623 DEBUG [master/jenkins-hbase17:0.Chore.1] janitor.CatalogJanitor(175):
2023-06-08 18:54:08,629 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating...
2023-06-08 18:54:08,630 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}
2023-06-08 18:54:08,638 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace
2023-06-08 18:54:08,640 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION
2023-06-08 18:54:08,644 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT
2023-06-08 18:54:08,666 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/.tmp/data/hbase/namespace/84f7c33633824224e98661f9285d2447
2023-06-08 18:54:08,668 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/.tmp/data/hbase/namespace/84f7c33633824224e98661f9285d2447 empty.
2023-06-08 18:54:08,669 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/.tmp/data/hbase/namespace/84f7c33633824224e98661f9285d2447
2023-06-08 18:54:08,669 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions
2023-06-08 18:54:08,722 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001
2023-06-08 18:54:08,725 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 84f7c33633824224e98661f9285d2447, NAME => 'hbase:namespace,,1686250448630.84f7c33633824224e98661f9285d2447.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/.tmp
2023-06-08 18:54:08,746 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686250448630.84f7c33633824224e98661f9285d2447.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-08 18:54:08,746 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 84f7c33633824224e98661f9285d2447, disabling compactions & flushes
2023-06-08 18:54:08,746 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686250448630.84f7c33633824224e98661f9285d2447.
2023-06-08 18:54:08,746 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686250448630.84f7c33633824224e98661f9285d2447.
2023-06-08 18:54:08,746 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686250448630.84f7c33633824224e98661f9285d2447. after waiting 0 ms
2023-06-08 18:54:08,746 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686250448630.84f7c33633824224e98661f9285d2447.
2023-06-08 18:54:08,747 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686250448630.84f7c33633824224e98661f9285d2447.
2023-06-08 18:54:08,747 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 84f7c33633824224e98661f9285d2447:
2023-06-08 18:54:08,754 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META
2023-06-08 18:54:08,766 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1686250448630.84f7c33633824224e98661f9285d2447.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686250448756"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686250448756"}]},"ts":"1686250448756"}
2023-06-08 18:54:08,788 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta.
2023-06-08 18:54:08,790 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS
2023-06-08 18:54:08,795 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686250448791"}]},"ts":"1686250448791"}
2023-06-08 18:54:08,800 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta
2023-06-08 18:54:08,807 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=84f7c33633824224e98661f9285d2447, ASSIGN}]
2023-06-08 18:54:08,810 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=84f7c33633824224e98661f9285d2447, ASSIGN
2023-06-08 18:54:08,812 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=84f7c33633824224e98661f9285d2447, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,40985,1686250446879; forceNewPlan=false, retain=false
2023-06-08 18:54:08,965 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=84f7c33633824224e98661f9285d2447, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,40985,1686250446879
2023-06-08 18:54:08,965 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1686250448630.84f7c33633824224e98661f9285d2447.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686250448964"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686250448964"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686250448964"}]},"ts":"1686250448964"}
2023-06-08 18:54:08,977 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 84f7c33633824224e98661f9285d2447, server=jenkins-hbase17.apache.org,40985,1686250446879}]
2023-06-08 18:54:09,146 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1686250448630.84f7c33633824224e98661f9285d2447.
2023-06-08 18:54:09,148 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 84f7c33633824224e98661f9285d2447, NAME => 'hbase:namespace,,1686250448630.84f7c33633824224e98661f9285d2447.', STARTKEY => '', ENDKEY => ''}
2023-06-08 18:54:09,151 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 84f7c33633824224e98661f9285d2447
2023-06-08 18:54:09,151 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686250448630.84f7c33633824224e98661f9285d2447.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-08 18:54:09,151 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 84f7c33633824224e98661f9285d2447
2023-06-08 18:54:09,151 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 84f7c33633824224e98661f9285d2447
2023-06-08 18:54:09,155 INFO [StoreOpener-84f7c33633824224e98661f9285d2447-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 84f7c33633824224e98661f9285d2447
2023-06-08 18:54:09,159 DEBUG [StoreOpener-84f7c33633824224e98661f9285d2447-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/namespace/84f7c33633824224e98661f9285d2447/info
2023-06-08 18:54:09,159 DEBUG [StoreOpener-84f7c33633824224e98661f9285d2447-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/namespace/84f7c33633824224e98661f9285d2447/info
2023-06-08 18:54:09,160 INFO [StoreOpener-84f7c33633824224e98661f9285d2447-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 84f7c33633824224e98661f9285d2447 columnFamilyName info
2023-06-08 18:54:09,161 INFO [StoreOpener-84f7c33633824224e98661f9285d2447-1] regionserver.HStore(310): Store=84f7c33633824224e98661f9285d2447/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-08 18:54:09,164 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/namespace/84f7c33633824224e98661f9285d2447
2023-06-08 18:54:09,166 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/namespace/84f7c33633824224e98661f9285d2447
2023-06-08 18:54:09,172 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 84f7c33633824224e98661f9285d2447
2023-06-08 18:54:09,176 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/namespace/84f7c33633824224e98661f9285d2447/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-06-08 18:54:09,176 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 84f7c33633824224e98661f9285d2447; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=808217, jitterRate=0.027701541781425476}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-06-08 18:54:09,177 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 84f7c33633824224e98661f9285d2447:
2023-06-08 18:54:09,179 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1686250448630.84f7c33633824224e98661f9285d2447., pid=6, masterSystemTime=1686250449133
2023-06-08 18:54:09,183 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1686250448630.84f7c33633824224e98661f9285d2447.
2023-06-08 18:54:09,183 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1686250448630.84f7c33633824224e98661f9285d2447.
2023-06-08 18:54:09,184 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=84f7c33633824224e98661f9285d2447, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,40985,1686250446879
2023-06-08 18:54:09,185 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1686250448630.84f7c33633824224e98661f9285d2447.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686250449184"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686250449184"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686250449184"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686250449184"}]},"ts":"1686250449184"}
2023-06-08 18:54:09,192 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5
2023-06-08 18:54:09,193 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 84f7c33633824224e98661f9285d2447, server=jenkins-hbase17.apache.org,40985,1686250446879 in 212 msec
2023-06-08 18:54:09,196 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4
2023-06-08 18:54:09,197 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=84f7c33633824224e98661f9285d2447, ASSIGN in 385 msec
2023-06-08 18:54:09,198 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE
2023-06-08 18:54:09,199 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686250449199"}]},"ts":"1686250449199"}
2023-06-08 18:54:09,203 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta
2023-06-08 18:54:09,207 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION
2023-06-08 18:54:09,210 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 576 msec
2023-06-08 18:54:09,241 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace
2023-06-08 18:54:09,242 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace
2023-06-08 18:54:09,242 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-08 18:54:09,277 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default
2023-06-08 18:54:09,296 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace
2023-06-08 18:54:09,302 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 33 msec
2023-06-08 18:54:09,312 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase
2023-06-08 18:54:09,328 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace
2023-06-08 18:54:09,333 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 20 msec
2023-06-08 18:54:09,350 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default
2023-06-08 18:54:09,351 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase
2023-06-08 18:54:09,352 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 2.384sec
2023-06-08 18:54:09,354 INFO [master/jenkins-hbase17:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled
2023-06-08 18:54:09,356 INFO [master/jenkins-hbase17:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting.
2023-06-08 18:54:09,356 INFO [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding
2023-06-08 18:54:09,357 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,35461,1686250445812-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled.
2023-06-08 18:54:09,358 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,35461,1686250445812-MobCompactionChore, period=604800, unit=SECONDS is enabled.
2023-06-08 18:54:09,369 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds
2023-06-08 18:54:09,441 DEBUG [Listener at localhost.localdomain/35315] zookeeper.ReadOnlyZKClient(139): Connect 0x310229f0 to 127.0.0.1:53627 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-06-08 18:54:09,445 DEBUG [Listener at localhost.localdomain/35315] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@212d6e78, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-06-08 18:54:09,457 DEBUG [hconnection-0x4933cba-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-06-08 18:54:09,467 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:46228, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-06-08 18:54:09,477 INFO [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase17.apache.org,35461,1686250445812
2023-06-08 18:54:09,478 INFO [Listener at localhost.localdomain/35315] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-08 18:54:09,488 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer
2023-06-08 18:54:09,488 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-08 18:54:09,489 INFO [Listener at localhost.localdomain/35315] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false
2023-06-08 18:54:09,502 DEBUG [Listener at localhost.localdomain/35315] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false
2023-06-08 18:54:09,507 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:54896, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService
2023-06-08 18:54:09,518 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35461] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions.
2023-06-08 18:54:09,519 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35461] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing.
2023-06-08 18:54:09,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35461] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-06-08 18:54:09,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35461] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling
2023-06-08 18:54:09,530 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_PRE_OPERATION
2023-06-08 18:54:09,533 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT
2023-06-08 18:54:09,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35461] master.MasterRpcServices(697): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testSlowSyncLogRolling" procId is: 9
2023-06-08 18:54:09,538 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d
2023-06-08 18:54:09,539 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d empty.
2023-06-08 18:54:09,543 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d
2023-06-08 18:54:09,543 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testSlowSyncLogRolling regions
2023-06-08 18:54:09,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35461] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9
2023-06-08 18:54:09,578 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/.tabledesc/.tableinfo.0000000001
2023-06-08 18:54:09,580 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => f6acffd80fe7928f97db4ca219d35d0d, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1686250449518.f6acffd80fe7928f97db4ca219d35d0d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/.tmp
2023-06-08 18:54:09,596 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1686250449518.f6acffd80fe7928f97db4ca219d35d0d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-08 18:54:09,596 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1604): Closing f6acffd80fe7928f97db4ca219d35d0d, disabling compactions & flushes
2023-06-08 18:54:09,596 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1686250449518.f6acffd80fe7928f97db4ca219d35d0d.
2023-06-08 18:54:09,596 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1686250449518.f6acffd80fe7928f97db4ca219d35d0d.
2023-06-08 18:54:09,596 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1686250449518.f6acffd80fe7928f97db4ca219d35d0d. after waiting 0 ms
2023-06-08 18:54:09,596 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1686250449518.f6acffd80fe7928f97db4ca219d35d0d.
2023-06-08 18:54:09,596 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1686250449518.f6acffd80fe7928f97db4ca219d35d0d.
2023-06-08 18:54:09,596 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for f6acffd80fe7928f97db4ca219d35d0d:
2023-06-08 18:54:09,601 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ADD_TO_META
2023-06-08 18:54:09,603 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testSlowSyncLogRolling,,1686250449518.f6acffd80fe7928f97db4ca219d35d0d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1686250449602"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686250449602"}]},"ts":"1686250449602"}
2023-06-08 18:54:09,606 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta.
2023-06-08 18:54:09,607 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS
2023-06-08 18:54:09,608 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686250449607"}]},"ts":"1686250449607"}
2023-06-08 18:54:09,610 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLING in hbase:meta
2023-06-08 18:54:09,621 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=f6acffd80fe7928f97db4ca219d35d0d, ASSIGN}]
2023-06-08 18:54:09,623 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=f6acffd80fe7928f97db4ca219d35d0d, ASSIGN
2023-06-08 18:54:09,625 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=f6acffd80fe7928f97db4ca219d35d0d, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,40985,1686250446879; forceNewPlan=false, retain=false
2023-06-08 18:54:09,778 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=f6acffd80fe7928f97db4ca219d35d0d, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,40985,1686250446879
2023-06-08 18:54:09,779 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testSlowSyncLogRolling,,1686250449518.f6acffd80fe7928f97db4ca219d35d0d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1686250449777"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686250449777"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686250449777"}]},"ts":"1686250449777"}
2023-06-08 18:54:09,785 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure f6acffd80fe7928f97db4ca219d35d0d, server=jenkins-hbase17.apache.org,40985,1686250446879}]
2023-06-08 18:54:09,953 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testSlowSyncLogRolling,,1686250449518.f6acffd80fe7928f97db4ca219d35d0d.
2023-06-08 18:54:09,953 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f6acffd80fe7928f97db4ca219d35d0d, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1686250449518.f6acffd80fe7928f97db4ca219d35d0d.', STARTKEY => '', ENDKEY => ''}
2023-06-08 18:54:09,954 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testSlowSyncLogRolling f6acffd80fe7928f97db4ca219d35d0d
2023-06-08 18:54:09,954 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1686250449518.f6acffd80fe7928f97db4ca219d35d0d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-08 18:54:09,954 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for f6acffd80fe7928f97db4ca219d35d0d
2023-06-08 18:54:09,954 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for f6acffd80fe7928f97db4ca219d35d0d
2023-06-08 18:54:09,956 INFO [StoreOpener-f6acffd80fe7928f97db4ca219d35d0d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region f6acffd80fe7928f97db4ca219d35d0d
2023-06-08 18:54:09,958 DEBUG [StoreOpener-f6acffd80fe7928f97db4ca219d35d0d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/info
2023-06-08 18:54:09,958 DEBUG [StoreOpener-f6acffd80fe7928f97db4ca219d35d0d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/info
2023-06-08 18:54:09,959 INFO [StoreOpener-f6acffd80fe7928f97db4ca219d35d0d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f6acffd80fe7928f97db4ca219d35d0d columnFamilyName info
2023-06-08 18:54:09,960 INFO [StoreOpener-f6acffd80fe7928f97db4ca219d35d0d-1] regionserver.HStore(310): Store=f6acffd80fe7928f97db4ca219d35d0d/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-08 18:54:09,962 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d
2023-06-08 18:54:09,963 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d
2023-06-08 18:54:09,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for f6acffd80fe7928f97db4ca219d35d0d
2023-06-08 18:54:09,970 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-06-08 18:54:09,971 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened f6acffd80fe7928f97db4ca219d35d0d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=708772, jitterRate=-0.09874999523162842}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-06-08 18:54:09,972 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for f6acffd80fe7928f97db4ca219d35d0d:
2023-06-08 18:54:09,973 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testSlowSyncLogRolling,,1686250449518.f6acffd80fe7928f97db4ca219d35d0d., pid=11, masterSystemTime=1686250449941
2023-06-08 18:54:09,976 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testSlowSyncLogRolling,,1686250449518.f6acffd80fe7928f97db4ca219d35d0d.
2023-06-08 18:54:09,976 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testSlowSyncLogRolling,,1686250449518.f6acffd80fe7928f97db4ca219d35d0d.
2023-06-08 18:54:09,977 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=f6acffd80fe7928f97db4ca219d35d0d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,40985,1686250446879 2023-06-08 18:54:09,977 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testSlowSyncLogRolling,,1686250449518.f6acffd80fe7928f97db4ca219d35d0d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1686250449976"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686250449976"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686250449976"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686250449976"}]},"ts":"1686250449976"} 2023-06-08 18:54:09,983 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-06-08 18:54:09,983 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure f6acffd80fe7928f97db4ca219d35d0d, server=jenkins-hbase17.apache.org,40985,1686250446879 in 195 msec 2023-06-08 18:54:09,987 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-06-08 18:54:09,987 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=f6acffd80fe7928f97db4ca219d35d0d, ASSIGN in 362 msec 2023-06-08 18:54:09,989 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-08 18:54:09,989 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686250449989"}]},"ts":"1686250449989"} 2023-06-08 18:54:09,991 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLED in hbase:meta 2023-06-08 18:54:09,994 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-06-08 18:54:09,997 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling in 470 msec 2023-06-08 18:54:13,958 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-06-08 18:54:14,024 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-06-08 18:54:14,025 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-06-08 18:54:14,026 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testSlowSyncLogRolling' 2023-06-08 18:54:15,953 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-06-08 18:54:15,953 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-06-08 18:54:19,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35461] 
master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-08 18:54:19,565 INFO [Listener at localhost.localdomain/35315] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testSlowSyncLogRolling, procId: 9 completed 2023-06-08 18:54:19,568 DEBUG [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testSlowSyncLogRolling 2023-06-08 18:54:19,569 DEBUG [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testSlowSyncLogRolling,,1686250449518.f6acffd80fe7928f97db4ca219d35d0d. 2023-06-08 18:54:31,606 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40985] regionserver.HRegion(9158): Flush requested on f6acffd80fe7928f97db4ca219d35d0d 2023-06-08 18:54:31,607 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing f6acffd80fe7928f97db4ca219d35d0d 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-08 18:54:31,699 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/.tmp/info/379a81ff084a401f91b8e5a832a4f997 2023-06-08 18:54:31,751 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/.tmp/info/379a81ff084a401f91b8e5a832a4f997 as hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/info/379a81ff084a401f91b8e5a832a4f997 2023-06-08 18:54:31,765 INFO [MemStoreFlusher.0] 
regionserver.HStore(1080): Added hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/info/379a81ff084a401f91b8e5a832a4f997, entries=7, sequenceid=11, filesize=12.1 K 2023-06-08 18:54:31,768 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for f6acffd80fe7928f97db4ca219d35d0d in 161ms, sequenceid=11, compaction requested=false 2023-06-08 18:54:31,770 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for f6acffd80fe7928f97db4ca219d35d0d: 2023-06-08 18:54:39,821 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41015,DS-f4768888-4875-4f84-b58d-1a3cdac79535,DISK], DatanodeInfoWithStorage[127.0.0.1:40843,DS-59d0442d-94c6-4f42-bb54-fa205bb7c2e6,DISK]] 2023-06-08 18:54:42,025 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41015,DS-f4768888-4875-4f84-b58d-1a3cdac79535,DISK], DatanodeInfoWithStorage[127.0.0.1:40843,DS-59d0442d-94c6-4f42-bb54-fa205bb7c2e6,DISK]] 2023-06-08 18:54:44,231 INFO [sync.4] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41015,DS-f4768888-4875-4f84-b58d-1a3cdac79535,DISK], DatanodeInfoWithStorage[127.0.0.1:40843,DS-59d0442d-94c6-4f42-bb54-fa205bb7c2e6,DISK]] 2023-06-08 18:54:46,436 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41015,DS-f4768888-4875-4f84-b58d-1a3cdac79535,DISK], DatanodeInfoWithStorage[127.0.0.1:40843,DS-59d0442d-94c6-4f42-bb54-fa205bb7c2e6,DISK]] 2023-06-08 18:54:46,436 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40985] regionserver.HRegion(9158): Flush requested on f6acffd80fe7928f97db4ca219d35d0d 2023-06-08 18:54:46,437 INFO 
[MemStoreFlusher.0] regionserver.HRegion(2745): Flushing f6acffd80fe7928f97db4ca219d35d0d 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-08 18:54:46,639 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41015,DS-f4768888-4875-4f84-b58d-1a3cdac79535,DISK], DatanodeInfoWithStorage[127.0.0.1:40843,DS-59d0442d-94c6-4f42-bb54-fa205bb7c2e6,DISK]] 2023-06-08 18:54:46,656 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=21 (bloomFilter=true), to=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/.tmp/info/7179c815761b479299d4392f48392dd7 2023-06-08 18:54:46,672 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/.tmp/info/7179c815761b479299d4392f48392dd7 as hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/info/7179c815761b479299d4392f48392dd7 2023-06-08 18:54:46,697 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/info/7179c815761b479299d4392f48392dd7, entries=7, sequenceid=21, filesize=12.1 K 2023-06-08 18:54:46,898 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41015,DS-f4768888-4875-4f84-b58d-1a3cdac79535,DISK], DatanodeInfoWithStorage[127.0.0.1:40843,DS-59d0442d-94c6-4f42-bb54-fa205bb7c2e6,DISK]] 2023-06-08 18:54:46,899 INFO 
[MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for f6acffd80fe7928f97db4ca219d35d0d in 462ms, sequenceid=21, compaction requested=false 2023-06-08 18:54:46,899 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for f6acffd80fe7928f97db4ca219d35d0d: 2023-06-08 18:54:46,899 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=24.2 K, sizeToCheck=16.0 K 2023-06-08 18:54:46,900 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-08 18:54:46,901 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/info/379a81ff084a401f91b8e5a832a4f997 because midkey is the same as first or last row 2023-06-08 18:54:48,640 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41015,DS-f4768888-4875-4f84-b58d-1a3cdac79535,DISK], DatanodeInfoWithStorage[127.0.0.1:40843,DS-59d0442d-94c6-4f42-bb54-fa205bb7c2e6,DISK]] 2023-06-08 18:54:50,843 WARN [sync.4] wal.AbstractFSWAL(1302): Requesting log roll because we exceeded slow sync threshold; count=7, threshold=5, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41015,DS-f4768888-4875-4f84-b58d-1a3cdac79535,DISK], DatanodeInfoWithStorage[127.0.0.1:40843,DS-59d0442d-94c6-4f42-bb54-fa205bb7c2e6,DISK]] 2023-06-08 18:54:50,844 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase17.apache.org%2C40985%2C1686250446879:(num 1686250448189) roll requested 2023-06-08 18:54:50,844 INFO [sync.4] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:41015,DS-f4768888-4875-4f84-b58d-1a3cdac79535,DISK], DatanodeInfoWithStorage[127.0.0.1:40843,DS-59d0442d-94c6-4f42-bb54-fa205bb7c2e6,DISK]] 2023-06-08 18:54:51,056 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41015,DS-f4768888-4875-4f84-b58d-1a3cdac79535,DISK], DatanodeInfoWithStorage[127.0.0.1:40843,DS-59d0442d-94c6-4f42-bb54-fa205bb7c2e6,DISK]] 2023-06-08 18:54:51,058 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/WALs/jenkins-hbase17.apache.org,40985,1686250446879/jenkins-hbase17.apache.org%2C40985%2C1686250446879.1686250448189 with entries=24, filesize=20.43 KB; new WAL /user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/WALs/jenkins-hbase17.apache.org,40985,1686250446879/jenkins-hbase17.apache.org%2C40985%2C1686250446879.1686250490844 2023-06-08 18:54:51,059 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40843,DS-59d0442d-94c6-4f42-bb54-fa205bb7c2e6,DISK], DatanodeInfoWithStorage[127.0.0.1:41015,DS-f4768888-4875-4f84-b58d-1a3cdac79535,DISK]] 2023-06-08 18:54:51,059 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/WALs/jenkins-hbase17.apache.org,40985,1686250446879/jenkins-hbase17.apache.org%2C40985%2C1686250446879.1686250448189 is not closed yet, will try archiving it next time 2023-06-08 18:55:00,860 INFO [Listener at localhost.localdomain/35315] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-06-08 18:55:05,865 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 5002 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40843,DS-59d0442d-94c6-4f42-bb54-fa205bb7c2e6,DISK], 
DatanodeInfoWithStorage[127.0.0.1:41015,DS-f4768888-4875-4f84-b58d-1a3cdac79535,DISK]] 2023-06-08 18:55:05,865 WARN [sync.0] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5002 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40843,DS-59d0442d-94c6-4f42-bb54-fa205bb7c2e6,DISK], DatanodeInfoWithStorage[127.0.0.1:41015,DS-f4768888-4875-4f84-b58d-1a3cdac79535,DISK]] 2023-06-08 18:55:05,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40985] regionserver.HRegion(9158): Flush requested on f6acffd80fe7928f97db4ca219d35d0d 2023-06-08 18:55:05,865 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase17.apache.org%2C40985%2C1686250446879:(num 1686250490844) roll requested 2023-06-08 18:55:05,866 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing f6acffd80fe7928f97db4ca219d35d0d 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-08 18:55:07,868 INFO [Listener at localhost.localdomain/35315] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-06-08 18:55:10,868 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 5001 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40843,DS-59d0442d-94c6-4f42-bb54-fa205bb7c2e6,DISK], DatanodeInfoWithStorage[127.0.0.1:41015,DS-f4768888-4875-4f84-b58d-1a3cdac79535,DISK]] 2023-06-08 18:55:10,868 WARN [sync.1] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5001 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40843,DS-59d0442d-94c6-4f42-bb54-fa205bb7c2e6,DISK], DatanodeInfoWithStorage[127.0.0.1:41015,DS-f4768888-4875-4f84-b58d-1a3cdac79535,DISK]] 2023-06-08 18:55:10,883 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40843,DS-59d0442d-94c6-4f42-bb54-fa205bb7c2e6,DISK], 
DatanodeInfoWithStorage[127.0.0.1:41015,DS-f4768888-4875-4f84-b58d-1a3cdac79535,DISK]] 2023-06-08 18:55:10,883 WARN [sync.2] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5000 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40843,DS-59d0442d-94c6-4f42-bb54-fa205bb7c2e6,DISK], DatanodeInfoWithStorage[127.0.0.1:41015,DS-f4768888-4875-4f84-b58d-1a3cdac79535,DISK]] 2023-06-08 18:55:10,885 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/WALs/jenkins-hbase17.apache.org,40985,1686250446879/jenkins-hbase17.apache.org%2C40985%2C1686250446879.1686250490844 with entries=6, filesize=6.07 KB; new WAL /user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/WALs/jenkins-hbase17.apache.org,40985,1686250446879/jenkins-hbase17.apache.org%2C40985%2C1686250446879.1686250505866 2023-06-08 18:55:10,885 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41015,DS-f4768888-4875-4f84-b58d-1a3cdac79535,DISK], DatanodeInfoWithStorage[127.0.0.1:40843,DS-59d0442d-94c6-4f42-bb54-fa205bb7c2e6,DISK]] 2023-06-08 18:55:10,885 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/WALs/jenkins-hbase17.apache.org,40985,1686250446879/jenkins-hbase17.apache.org%2C40985%2C1686250446879.1686250490844 is not closed yet, will try archiving it next time 2023-06-08 18:55:10,895 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=31 (bloomFilter=true), to=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/.tmp/info/0d6ba27a4eab41e39fb60e03ec79b3ca 
2023-06-08 18:55:10,909 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/.tmp/info/0d6ba27a4eab41e39fb60e03ec79b3ca as hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/info/0d6ba27a4eab41e39fb60e03ec79b3ca 2023-06-08 18:55:10,920 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/info/0d6ba27a4eab41e39fb60e03ec79b3ca, entries=7, sequenceid=31, filesize=12.1 K 2023-06-08 18:55:10,923 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for f6acffd80fe7928f97db4ca219d35d0d in 5058ms, sequenceid=31, compaction requested=true 2023-06-08 18:55:10,924 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for f6acffd80fe7928f97db4ca219d35d0d: 2023-06-08 18:55:10,924 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=36.3 K, sizeToCheck=16.0 K 2023-06-08 18:55:10,924 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-08 18:55:10,924 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/info/379a81ff084a401f91b8e5a832a4f997 because midkey is the same as first or last row 2023-06-08 18:55:10,926 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): 
Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-08 18:55:10,927 DEBUG [RS:0;jenkins-hbase17:40985-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-08 18:55:10,932 DEBUG [RS:0;jenkins-hbase17:40985-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 37197 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-08 18:55:10,934 DEBUG [RS:0;jenkins-hbase17:40985-shortCompactions-0] regionserver.HStore(1912): f6acffd80fe7928f97db4ca219d35d0d/info is initiating minor compaction (all files) 2023-06-08 18:55:10,934 INFO [RS:0;jenkins-hbase17:40985-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of f6acffd80fe7928f97db4ca219d35d0d/info in TestLogRolling-testSlowSyncLogRolling,,1686250449518.f6acffd80fe7928f97db4ca219d35d0d. 
2023-06-08 18:55:10,935 INFO [RS:0;jenkins-hbase17:40985-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/info/379a81ff084a401f91b8e5a832a4f997, hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/info/7179c815761b479299d4392f48392dd7, hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/info/0d6ba27a4eab41e39fb60e03ec79b3ca] into tmpdir=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/.tmp, totalSize=36.3 K 2023-06-08 18:55:10,936 DEBUG [RS:0;jenkins-hbase17:40985-shortCompactions-0] compactions.Compactor(207): Compacting 379a81ff084a401f91b8e5a832a4f997, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, earliestPutTs=1686250459575 2023-06-08 18:55:10,938 DEBUG [RS:0;jenkins-hbase17:40985-shortCompactions-0] compactions.Compactor(207): Compacting 7179c815761b479299d4392f48392dd7, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=21, earliestPutTs=1686250473609 2023-06-08 18:55:10,939 DEBUG [RS:0;jenkins-hbase17:40985-shortCompactions-0] compactions.Compactor(207): Compacting 0d6ba27a4eab41e39fb60e03ec79b3ca, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=31, earliestPutTs=1686250488439 2023-06-08 18:55:10,979 INFO [RS:0;jenkins-hbase17:40985-shortCompactions-0] throttle.PressureAwareThroughputController(145): f6acffd80fe7928f97db4ca219d35d0d#info#compaction#3 average throughput is 10.77 
MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-08 18:55:11,014 DEBUG [RS:0;jenkins-hbase17:40985-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/.tmp/info/648d6f0dde45450f8348a1e65d65d8e0 as hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/info/648d6f0dde45450f8348a1e65d65d8e0 2023-06-08 18:55:11,071 INFO [RS:0;jenkins-hbase17:40985-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in f6acffd80fe7928f97db4ca219d35d0d/info of f6acffd80fe7928f97db4ca219d35d0d into 648d6f0dde45450f8348a1e65d65d8e0(size=27.0 K), total size for store is 27.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-08 18:55:11,071 DEBUG [RS:0;jenkins-hbase17:40985-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for f6acffd80fe7928f97db4ca219d35d0d: 2023-06-08 18:55:11,071 INFO [RS:0;jenkins-hbase17:40985-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testSlowSyncLogRolling,,1686250449518.f6acffd80fe7928f97db4ca219d35d0d., storeName=f6acffd80fe7928f97db4ca219d35d0d/info, priority=13, startTime=1686250510926; duration=0sec 2023-06-08 18:55:11,073 DEBUG [RS:0;jenkins-hbase17:40985-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=27.0 K, sizeToCheck=16.0 K 2023-06-08 18:55:11,074 DEBUG [RS:0;jenkins-hbase17:40985-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-08 18:55:11,075 DEBUG [RS:0;jenkins-hbase17:40985-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/info/648d6f0dde45450f8348a1e65d65d8e0 because midkey is the same as first or last row 2023-06-08 18:55:11,075 DEBUG [RS:0;jenkins-hbase17:40985-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-08 18:55:22,990 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40985] regionserver.HRegion(9158): Flush requested on f6acffd80fe7928f97db4ca219d35d0d 2023-06-08 18:55:22,990 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing f6acffd80fe7928f97db4ca219d35d0d 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-08 18:55:23,030 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=42 (bloomFilter=true), 
to=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/.tmp/info/07acf244801c4f9e964ea6fb34f6a75d 2023-06-08 18:55:23,042 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/.tmp/info/07acf244801c4f9e964ea6fb34f6a75d as hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/info/07acf244801c4f9e964ea6fb34f6a75d 2023-06-08 18:55:23,056 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/info/07acf244801c4f9e964ea6fb34f6a75d, entries=7, sequenceid=42, filesize=12.1 K 2023-06-08 18:55:23,058 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for f6acffd80fe7928f97db4ca219d35d0d in 68ms, sequenceid=42, compaction requested=false 2023-06-08 18:55:23,059 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for f6acffd80fe7928f97db4ca219d35d0d: 2023-06-08 18:55:23,060 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=39.1 K, sizeToCheck=16.0 K 2023-06-08 18:55:23,060 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-08 18:55:23,060 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split 
hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/info/648d6f0dde45450f8348a1e65d65d8e0 because midkey is the same as first or last row 2023-06-08 18:55:30,999 INFO [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-08 18:55:31,001 INFO [Listener at localhost.localdomain/35315] client.ConnectionImplementation(1980): Closing master protocol: MasterService 2023-06-08 18:55:31,002 DEBUG [Listener at localhost.localdomain/35315] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x310229f0 to 127.0.0.1:53627 2023-06-08 18:55:31,002 DEBUG [Listener at localhost.localdomain/35315] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 18:55:31,003 DEBUG [Listener at localhost.localdomain/35315] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-06-08 18:55:31,003 DEBUG [Listener at localhost.localdomain/35315] util.JVMClusterUtil(257): Found active master hash=506963047, stopped=false 2023-06-08 18:55:31,003 INFO [Listener at localhost.localdomain/35315] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase17.apache.org,35461,1686250445812 2023-06-08 18:55:31,005 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): regionserver:40985-0x100abc9540c0001, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-08 18:55:31,007 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-08 18:55:31,007 INFO [Listener at localhost.localdomain/35315] procedure2.ProcedureExecutor(629): Stopping 2023-06-08 18:55:31,007 DEBUG [Listener at 
localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:55:31,007 DEBUG [Listener at localhost.localdomain/35315] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x114ee5b8 to 127.0.0.1:53627 2023-06-08 18:55:31,007 DEBUG [Listener at localhost.localdomain/35315] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 18:55:31,008 INFO [Listener at localhost.localdomain/35315] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase17.apache.org,40985,1686250446879' ***** 2023-06-08 18:55:31,008 INFO [Listener at localhost.localdomain/35315] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-08 18:55:31,008 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 18:55:31,007 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40985-0x100abc9540c0001, quorum=127.0.0.1:53627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 18:55:31,013 INFO [RS:0;jenkins-hbase17:40985] regionserver.HeapMemoryManager(220): Stopping 2023-06-08 18:55:31,013 INFO [RS:0;jenkins-hbase17:40985] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-08 18:55:31,013 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-08 18:55:31,013 INFO [RS:0;jenkins-hbase17:40985] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-06-08 18:55:31,014 INFO [RS:0;jenkins-hbase17:40985] regionserver.HRegionServer(3303): Received CLOSE for f6acffd80fe7928f97db4ca219d35d0d 2023-06-08 18:55:31,016 INFO [RS:0;jenkins-hbase17:40985] regionserver.HRegionServer(3303): Received CLOSE for 84f7c33633824224e98661f9285d2447 2023-06-08 18:55:31,017 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing f6acffd80fe7928f97db4ca219d35d0d, disabling compactions & flushes 2023-06-08 18:55:31,017 INFO [RS:0;jenkins-hbase17:40985] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,40985,1686250446879 2023-06-08 18:55:31,017 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1686250449518.f6acffd80fe7928f97db4ca219d35d0d. 2023-06-08 18:55:31,017 DEBUG [RS:0;jenkins-hbase17:40985] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1a5b4f7a to 127.0.0.1:53627 2023-06-08 18:55:31,017 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1686250449518.f6acffd80fe7928f97db4ca219d35d0d. 2023-06-08 18:55:31,018 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1686250449518.f6acffd80fe7928f97db4ca219d35d0d. after waiting 0 ms 2023-06-08 18:55:31,018 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1686250449518.f6acffd80fe7928f97db4ca219d35d0d. 2023-06-08 18:55:31,018 DEBUG [RS:0;jenkins-hbase17:40985] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 18:55:31,018 INFO [RS:0;jenkins-hbase17:40985] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
2023-06-08 18:55:31,018 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing f6acffd80fe7928f97db4ca219d35d0d 1/1 column families, dataSize=3.15 KB heapSize=3.63 KB 2023-06-08 18:55:31,018 INFO [RS:0;jenkins-hbase17:40985] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-08 18:55:31,018 INFO [RS:0;jenkins-hbase17:40985] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-06-08 18:55:31,018 INFO [RS:0;jenkins-hbase17:40985] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-08 18:55:31,019 INFO [RS:0;jenkins-hbase17:40985] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-06-08 18:55:31,019 DEBUG [RS:0;jenkins-hbase17:40985] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, f6acffd80fe7928f97db4ca219d35d0d=TestLogRolling-testSlowSyncLogRolling,,1686250449518.f6acffd80fe7928f97db4ca219d35d0d., 84f7c33633824224e98661f9285d2447=hbase:namespace,,1686250448630.84f7c33633824224e98661f9285d2447.} 2023-06-08 18:55:31,021 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-08 18:55:31,021 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-08 18:55:31,021 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-08 18:55:31,021 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-08 18:55:31,021 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-08 18:55:31,022 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] 
regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.87 KB heapSize=5.38 KB 2023-06-08 18:55:31,022 DEBUG [RS:0;jenkins-hbase17:40985] regionserver.HRegionServer(1504): Waiting on 1588230740, 84f7c33633824224e98661f9285d2447, f6acffd80fe7928f97db4ca219d35d0d 2023-06-08 18:55:31,045 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-06-08 18:55:31,045 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-06-08 18:55:31,089 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.64 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/meta/1588230740/.tmp/info/fc136500117c4040987ac1d9e1871697 2023-06-08 18:55:31,133 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=232 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/meta/1588230740/.tmp/table/3701aa843df6442988a84462aad9f836 2023-06-08 18:55:31,148 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/meta/1588230740/.tmp/info/fc136500117c4040987ac1d9e1871697 as hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/meta/1588230740/info/fc136500117c4040987ac1d9e1871697 2023-06-08 18:55:31,162 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/meta/1588230740/info/fc136500117c4040987ac1d9e1871697, 
entries=20, sequenceid=14, filesize=7.4 K 2023-06-08 18:55:31,164 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/meta/1588230740/.tmp/table/3701aa843df6442988a84462aad9f836 as hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/meta/1588230740/table/3701aa843df6442988a84462aad9f836 2023-06-08 18:55:31,175 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/meta/1588230740/table/3701aa843df6442988a84462aad9f836, entries=4, sequenceid=14, filesize=4.8 K 2023-06-08 18:55:31,176 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.87 KB/2938, heapSize ~5.09 KB/5216, currentSize=0 B/0 for 1588230740 in 154ms, sequenceid=14, compaction requested=false 2023-06-08 18:55:31,186 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-06-08 18:55:31,187 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-06-08 18:55:31,188 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-08 18:55:31,188 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-08 18:55:31,188 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-06-08 18:55:31,223 
DEBUG [RS:0;jenkins-hbase17:40985] regionserver.HRegionServer(1504): Waiting on 84f7c33633824224e98661f9285d2447, f6acffd80fe7928f97db4ca219d35d0d 2023-06-08 18:55:31,423 DEBUG [RS:0;jenkins-hbase17:40985] regionserver.HRegionServer(1504): Waiting on 84f7c33633824224e98661f9285d2447, f6acffd80fe7928f97db4ca219d35d0d 2023-06-08 18:55:31,484 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.15 KB at sequenceid=48 (bloomFilter=true), to=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/.tmp/info/aa41bbf561e44635be1f1a5dd489482c 2023-06-08 18:55:31,494 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/.tmp/info/aa41bbf561e44635be1f1a5dd489482c as hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/info/aa41bbf561e44635be1f1a5dd489482c 2023-06-08 18:55:31,503 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/info/aa41bbf561e44635be1f1a5dd489482c, entries=3, sequenceid=48, filesize=7.9 K 2023-06-08 18:55:31,504 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.15 KB/3228, heapSize ~3.61 KB/3696, currentSize=0 B/0 for f6acffd80fe7928f97db4ca219d35d0d in 486ms, sequenceid=48, compaction requested=true 2023-06-08 18:55:31,506 DEBUG 
[StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1686250449518.f6acffd80fe7928f97db4ca219d35d0d.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/info/379a81ff084a401f91b8e5a832a4f997, hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/info/7179c815761b479299d4392f48392dd7, hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/info/0d6ba27a4eab41e39fb60e03ec79b3ca] to archive 2023-06-08 18:55:31,512 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1686250449518.f6acffd80fe7928f97db4ca219d35d0d.-1] backup.HFileArchiver(360): Archiving compacted files. 2023-06-08 18:55:31,517 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1686250449518.f6acffd80fe7928f97db4ca219d35d0d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/info/379a81ff084a401f91b8e5a832a4f997 to hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/archive/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/info/379a81ff084a401f91b8e5a832a4f997 2023-06-08 18:55:31,519 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1686250449518.f6acffd80fe7928f97db4ca219d35d0d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, 
hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/info/7179c815761b479299d4392f48392dd7 to hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/archive/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/info/7179c815761b479299d4392f48392dd7 2023-06-08 18:55:31,522 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1686250449518.f6acffd80fe7928f97db4ca219d35d0d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/info/0d6ba27a4eab41e39fb60e03ec79b3ca to hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/archive/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/info/0d6ba27a4eab41e39fb60e03ec79b3ca 2023-06-08 18:55:31,547 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/default/TestLogRolling-testSlowSyncLogRolling/f6acffd80fe7928f97db4ca219d35d0d/recovered.edits/51.seqid, newMaxSeqId=51, maxSeqId=1 2023-06-08 18:55:31,549 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1686250449518.f6acffd80fe7928f97db4ca219d35d0d. 
2023-06-08 18:55:31,549 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for f6acffd80fe7928f97db4ca219d35d0d: 2023-06-08 18:55:31,550 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testSlowSyncLogRolling,,1686250449518.f6acffd80fe7928f97db4ca219d35d0d. 2023-06-08 18:55:31,551 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 84f7c33633824224e98661f9285d2447, disabling compactions & flushes 2023-06-08 18:55:31,551 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686250448630.84f7c33633824224e98661f9285d2447. 2023-06-08 18:55:31,551 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686250448630.84f7c33633824224e98661f9285d2447. 2023-06-08 18:55:31,551 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686250448630.84f7c33633824224e98661f9285d2447. after waiting 0 ms 2023-06-08 18:55:31,551 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686250448630.84f7c33633824224e98661f9285d2447. 
2023-06-08 18:55:31,551 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 84f7c33633824224e98661f9285d2447 1/1 column families, dataSize=78 B heapSize=488 B 2023-06-08 18:55:31,573 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/namespace/84f7c33633824224e98661f9285d2447/.tmp/info/353a825d126045b6acefbbc591785604 2023-06-08 18:55:31,586 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/namespace/84f7c33633824224e98661f9285d2447/.tmp/info/353a825d126045b6acefbbc591785604 as hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/namespace/84f7c33633824224e98661f9285d2447/info/353a825d126045b6acefbbc591785604 2023-06-08 18:55:31,598 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/namespace/84f7c33633824224e98661f9285d2447/info/353a825d126045b6acefbbc591785604, entries=2, sequenceid=6, filesize=4.8 K 2023-06-08 18:55:31,600 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 84f7c33633824224e98661f9285d2447 in 49ms, sequenceid=6, compaction requested=false 2023-06-08 18:55:31,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/data/hbase/namespace/84f7c33633824224e98661f9285d2447/recovered.edits/9.seqid, 
newMaxSeqId=9, maxSeqId=1 2023-06-08 18:55:31,615 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686250448630.84f7c33633824224e98661f9285d2447. 2023-06-08 18:55:31,615 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 84f7c33633824224e98661f9285d2447: 2023-06-08 18:55:31,616 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1686250448630.84f7c33633824224e98661f9285d2447. 2023-06-08 18:55:31,624 INFO [RS:0;jenkins-hbase17:40985] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,40985,1686250446879; all regions closed. 2023-06-08 18:55:31,625 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/WALs/jenkins-hbase17.apache.org,40985,1686250446879 2023-06-08 18:55:32,036 DEBUG [RS:0;jenkins-hbase17:40985] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/oldWALs 2023-06-08 18:55:32,036 INFO [RS:0;jenkins-hbase17:40985] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase17.apache.org%2C40985%2C1686250446879.meta:.meta(num 1686250448412) 2023-06-08 18:55:32,037 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/WALs/jenkins-hbase17.apache.org,40985,1686250446879 2023-06-08 18:55:32,048 DEBUG [RS:0;jenkins-hbase17:40985] wal.AbstractFSWAL(1028): Moved 3 WAL file(s) to /user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/oldWALs 2023-06-08 18:55:32,048 INFO [RS:0;jenkins-hbase17:40985] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase17.apache.org%2C40985%2C1686250446879:(num 1686250505866) 2023-06-08 18:55:32,048 DEBUG [RS:0;jenkins-hbase17:40985] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 18:55:32,048 INFO 
[RS:0;jenkins-hbase17:40985] regionserver.LeaseManager(133): Closed leases 2023-06-08 18:55:32,048 INFO [RS:0;jenkins-hbase17:40985] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-06-08 18:55:32,048 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-08 18:55:32,049 INFO [RS:0;jenkins-hbase17:40985] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:40985 2023-06-08 18:55:32,051 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-08 18:55:32,056 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): regionserver:40985-0x100abc9540c0001, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,40985,1686250446879 2023-06-08 18:55:32,056 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 18:55:32,056 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): regionserver:40985-0x100abc9540c0001, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 18:55:32,057 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,40985,1686250446879] 2023-06-08 18:55:32,057 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,40985,1686250446879; numProcessing=1 2023-06-08 18:55:32,058 DEBUG 
[RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,40985,1686250446879 already deleted, retry=false 2023-06-08 18:55:32,058 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,40985,1686250446879 expired; onlineServers=0 2023-06-08 18:55:32,058 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase17.apache.org,35461,1686250445812' ***** 2023-06-08 18:55:32,058 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-06-08 18:55:32,059 DEBUG [M:0;jenkins-hbase17:35461] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5f0aae5c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-06-08 18:55:32,059 INFO [M:0;jenkins-hbase17:35461] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,35461,1686250445812 2023-06-08 18:55:32,059 INFO [M:0;jenkins-hbase17:35461] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,35461,1686250445812; all regions closed. 2023-06-08 18:55:32,059 DEBUG [M:0;jenkins-hbase17:35461] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 18:55:32,059 DEBUG [M:0;jenkins-hbase17:35461] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-06-08 18:55:32,059 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-06-08 18:55:32,059 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1686250447913] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1686250447913,5,FailOnTimeoutGroup] 2023-06-08 18:55:32,059 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1686250447913] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1686250447913,5,FailOnTimeoutGroup] 2023-06-08 18:55:32,059 DEBUG [M:0;jenkins-hbase17:35461] cleaner.HFileCleaner(317): Stopping file delete threads 2023-06-08 18:55:32,061 INFO [M:0;jenkins-hbase17:35461] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-06-08 18:55:32,061 INFO [M:0;jenkins-hbase17:35461] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-06-08 18:55:32,061 INFO [M:0;jenkins-hbase17:35461] hbase.ChoreService(369): Chore service for: master/jenkins-hbase17:0 had [] on shutdown 2023-06-08 18:55:32,061 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-06-08 18:55:32,062 DEBUG [M:0;jenkins-hbase17:35461] master.HMaster(1512): Stopping service threads 2023-06-08 18:55:32,062 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:55:32,062 INFO [M:0;jenkins-hbase17:35461] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-06-08 18:55:32,062 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Set 
watcher on znode that does not yet exist, /hbase/master 2023-06-08 18:55:32,062 INFO [M:0;jenkins-hbase17:35461] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-06-08 18:55:32,062 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-06-08 18:55:32,063 DEBUG [M:0;jenkins-hbase17:35461] zookeeper.ZKUtil(398): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-06-08 18:55:32,063 WARN [M:0;jenkins-hbase17:35461] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-06-08 18:55:32,063 INFO [M:0;jenkins-hbase17:35461] assignment.AssignmentManager(315): Stopping assignment manager 2023-06-08 18:55:32,063 INFO [M:0;jenkins-hbase17:35461] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-06-08 18:55:32,064 DEBUG [M:0;jenkins-hbase17:35461] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-08 18:55:32,064 INFO [M:0;jenkins-hbase17:35461] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 18:55:32,064 DEBUG [M:0;jenkins-hbase17:35461] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 18:55:32,064 DEBUG [M:0;jenkins-hbase17:35461] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-08 18:55:32,064 DEBUG [M:0;jenkins-hbase17:35461] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-08 18:55:32,064 INFO [M:0;jenkins-hbase17:35461] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.31 KB heapSize=46.76 KB 2023-06-08 18:55:32,079 INFO [M:0;jenkins-hbase17:35461] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.31 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/5c58b038317c4ef2b5dda52940d4a028 2023-06-08 18:55:32,086 INFO [M:0;jenkins-hbase17:35461] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5c58b038317c4ef2b5dda52940d4a028 2023-06-08 18:55:32,087 DEBUG [M:0;jenkins-hbase17:35461] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/5c58b038317c4ef2b5dda52940d4a028 as hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/5c58b038317c4ef2b5dda52940d4a028 2023-06-08 18:55:32,093 INFO [M:0;jenkins-hbase17:35461] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5c58b038317c4ef2b5dda52940d4a028 2023-06-08 18:55:32,093 INFO [M:0;jenkins-hbase17:35461] regionserver.HStore(1080): Added hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/5c58b038317c4ef2b5dda52940d4a028, entries=11, sequenceid=100, filesize=6.1 K 2023-06-08 18:55:32,095 INFO [M:0;jenkins-hbase17:35461] regionserver.HRegion(2948): Finished flush of dataSize ~38.31 KB/39234, heapSize ~46.74 KB/47864, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 31ms, sequenceid=100, 
compaction requested=false 2023-06-08 18:55:32,097 INFO [M:0;jenkins-hbase17:35461] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 18:55:32,097 DEBUG [M:0;jenkins-hbase17:35461] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-08 18:55:32,097 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/MasterData/WALs/jenkins-hbase17.apache.org,35461,1686250445812 2023-06-08 18:55:32,102 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-08 18:55:32,102 INFO [M:0;jenkins-hbase17:35461] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-06-08 18:55:32,103 INFO [M:0;jenkins-hbase17:35461] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:35461 2023-06-08 18:55:32,105 DEBUG [M:0;jenkins-hbase17:35461] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase17.apache.org,35461,1686250445812 already deleted, retry=false 2023-06-08 18:55:32,157 INFO [RS:0;jenkins-hbase17:40985] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,40985,1686250446879; zookeeper connection closed. 
2023-06-08 18:55:32,157 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): regionserver:40985-0x100abc9540c0001, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-08 18:55:32,158 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): regionserver:40985-0x100abc9540c0001, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-08 18:55:32,158 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@45924483] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@45924483
2023-06-08 18:55:32,158 INFO [Listener at localhost.localdomain/35315] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete
2023-06-08 18:55:32,257 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-08 18:55:32,257 INFO [M:0;jenkins-hbase17:35461] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,35461,1686250445812; zookeeper connection closed.
2023-06-08 18:55:32,258 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:35461-0x100abc9540c0000, quorum=127.0.0.1:53627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-08 18:55:32,260 WARN [Listener at localhost.localdomain/35315] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-06-08 18:55:32,266 INFO [Listener at localhost.localdomain/35315] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-08 18:55:32,372 WARN [BP-274307131-136.243.18.41-1686250443048 heartbeating to localhost.localdomain/127.0.0.1:44823] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-06-08 18:55:32,372 WARN [BP-274307131-136.243.18.41-1686250443048 heartbeating to localhost.localdomain/127.0.0.1:44823] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-274307131-136.243.18.41-1686250443048 (Datanode Uuid 945f8a5a-92f5-47d5-8095-96ade6fa03e0) service to localhost.localdomain/127.0.0.1:44823
2023-06-08 18:55:32,374 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f/cluster_76611751-72e3-fc7b-6601-4b6d79a553f5/dfs/data/data3/current/BP-274307131-136.243.18.41-1686250443048] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-08 18:55:32,374 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f/cluster_76611751-72e3-fc7b-6601-4b6d79a553f5/dfs/data/data4/current/BP-274307131-136.243.18.41-1686250443048] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-08 18:55:32,375 WARN [Listener at localhost.localdomain/35315] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-06-08 18:55:32,377 INFO [Listener at localhost.localdomain/35315] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-08 18:55:32,481 WARN [BP-274307131-136.243.18.41-1686250443048 heartbeating to localhost.localdomain/127.0.0.1:44823] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-06-08 18:55:32,481 WARN [BP-274307131-136.243.18.41-1686250443048 heartbeating to localhost.localdomain/127.0.0.1:44823] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-274307131-136.243.18.41-1686250443048 (Datanode Uuid a77ec2f1-4fc8-4f67-8577-7f26fd566852) service to localhost.localdomain/127.0.0.1:44823
2023-06-08 18:55:32,481 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f/cluster_76611751-72e3-fc7b-6601-4b6d79a553f5/dfs/data/data1/current/BP-274307131-136.243.18.41-1686250443048] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-08 18:55:32,482 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f/cluster_76611751-72e3-fc7b-6601-4b6d79a553f5/dfs/data/data2/current/BP-274307131-136.243.18.41-1686250443048] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-08 18:55:32,527 INFO [Listener at localhost.localdomain/35315] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0
2023-06-08 18:55:32,638 INFO [Listener at localhost.localdomain/35315] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-06-08 18:55:32,680 INFO [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(1293): Minicluster is down
2023-06-08 18:55:32,695 INFO [Listener at localhost.localdomain/35315] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=50 (was 10)
Potentially hanging thread: nioEventLoopGroup-5-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: IPC Parameter Sending Thread #1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-4-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: HBase-Metrics2-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-3-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-3-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-5-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-3-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: IPC Client (1807695942) connection to localhost.localdomain/127.0.0.1:44823 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
Potentially hanging thread: SessionTracker java.lang.Thread.sleep(Native Method) org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:151)
Potentially hanging thread: nioEventLoopGroup-2-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: master/jenkins-hbase17:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: LeaseRenewer:jenkins.hfs.0@localhost.localdomain:44823 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: Listener at localhost.localdomain/35315 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: region-location-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: regionserver/jenkins-hbase17:0.procedureResultReporter sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77)
Potentially hanging thread: region-location-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: org.apache.hadoop.hdfs.PeerCache@51963c35 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:253) org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:46) org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:124) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RPCClient-NioEventLoopGroup-4-2 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: IPC Client (1807695942) connection to localhost.localdomain/127.0.0.1:44823 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
Potentially hanging thread: RPCClient-NioEventLoopGroup-4-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: master/jenkins-hbase17:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: IPC Client (1807695942) connection to localhost.localdomain/127.0.0.1:44823 from jenkins.hfs.0 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
Potentially hanging thread: nioEventLoopGroup-2-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: Idle-Rpc-Conn-Sweeper-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505)
Potentially hanging thread: nioEventLoopGroup-5-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-4-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RpcClient-timer-pool-0 java.lang.Thread.sleep(Native Method) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:600) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:496) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: SnapshotHandlerChoreCleaner sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-1-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: Monitor thread for TaskMonitor java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.monitoring.TaskMonitor$MonitorRunnable.run(TaskMonitor.java:327) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-1-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-3-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: IPC Parameter Sending Thread #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-1-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-4-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner java.lang.Object.wait(Native Method) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165) org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3693) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:44823 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-2-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750)
 - Thread LEAK? -, OpenFileDescriptor=438 (was 263) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=319 (was 423), ProcessCount=186 (was 186), AvailableMemoryMB=1888 (was 2105)
2023-06-08 18:55:32,707 INFO [Listener at localhost.localdomain/35315] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=51, OpenFileDescriptor=438, MaxFileDescriptor=60000, SystemLoadAverage=319, ProcessCount=186, AvailableMemoryMB=1887
2023-06-08 18:55:32,707 INFO [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-06-08 18:55:32,707 INFO [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f/hadoop.log.dir so I do NOT create it in target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be
2023-06-08 18:55:32,707 INFO [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a8af8f03-5835-e9be-345a-c56f6a6bda8f/hadoop.tmp.dir so I do NOT create it in target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be
2023-06-08 18:55:32,708 INFO [Listener at localhost.localdomain/35315] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/cluster_b99a2616-cab3-0f0b-e54f-fc7bbe7c57e5, deleteOnExit=true
2023-06-08 18:55:32,708 INFO [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-06-08 18:55:32,708 INFO [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/test.cache.data in system properties and HBase conf
2023-06-08 18:55:32,708 INFO [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/hadoop.tmp.dir in system properties and HBase conf
2023-06-08 18:55:32,708 INFO [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/hadoop.log.dir in system properties and HBase conf
2023-06-08 18:55:32,708 INFO [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/mapreduce.cluster.local.dir in system properties and HBase conf
2023-06-08 18:55:32,708 INFO [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-06-08 18:55:32,708 INFO [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-06-08 18:55:32,709 DEBUG [Listener at localhost.localdomain/35315] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-06-08 18:55:32,709 INFO [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-06-08 18:55:32,709 INFO [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-06-08 18:55:32,709 INFO [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-06-08 18:55:32,709 INFO [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-06-08 18:55:32,709 INFO [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-06-08 18:55:32,710 INFO [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-08 18:55:32,710 INFO [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-08 18:55:32,710 INFO [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-08 18:55:32,710 INFO [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-08 18:55:32,710 INFO [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/nfs.dump.dir in system properties and HBase conf 2023-06-08 18:55:32,710 INFO [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/java.io.tmpdir in system properties and HBase conf 2023-06-08 18:55:32,710 INFO [Listener at localhost.localdomain/35315] 
hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-08 18:55:32,710 INFO [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-08 18:55:32,710 INFO [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-08 18:55:32,712 WARN [Listener at localhost.localdomain/35315] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-06-08 18:55:32,714 WARN [Listener at localhost.localdomain/35315] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-08 18:55:32,715 WARN [Listener at localhost.localdomain/35315] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-08 18:55:32,744 WARN [Listener at localhost.localdomain/35315] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 18:55:32,748 INFO [Listener at localhost.localdomain/35315] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 18:55:32,758 INFO [Listener at localhost.localdomain/35315] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/java.io.tmpdir/Jetty_localhost_localdomain_41403_hdfs____.i72kjw/webapp 2023-06-08 18:55:32,869 INFO [Listener at localhost.localdomain/35315] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:41403 2023-06-08 18:55:32,871 WARN [Listener at localhost.localdomain/35315] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-06-08 18:55:32,873 WARN [Listener at localhost.localdomain/35315] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-08 18:55:32,873 WARN [Listener at localhost.localdomain/35315] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-08 18:55:32,940 WARN [Listener at localhost.localdomain/42703] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 18:55:32,969 WARN [Listener at localhost.localdomain/42703] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 18:55:32,976 WARN [Listener at localhost.localdomain/42703] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 18:55:32,979 INFO [Listener at localhost.localdomain/42703] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 18:55:32,985 INFO [Listener at localhost.localdomain/42703] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/java.io.tmpdir/Jetty_localhost_36535_datanode____fex797/webapp 2023-06-08 18:55:33,075 INFO [Listener at localhost.localdomain/42703] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36535 2023-06-08 18:55:33,082 WARN [Listener at localhost.localdomain/34261] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 18:55:33,122 WARN [Listener at localhost.localdomain/34261] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 18:55:33,124 WARN [Listener at localhost.localdomain/34261] http.HttpRequestLog(97): Jetty request 
log can only be enabled using Log4j 2023-06-08 18:55:33,126 INFO [Listener at localhost.localdomain/34261] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 18:55:33,131 INFO [Listener at localhost.localdomain/34261] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/java.io.tmpdir/Jetty_localhost_41323_datanode____.mtf3pe/webapp 2023-06-08 18:55:33,287 INFO [Listener at localhost.localdomain/34261] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41323 2023-06-08 18:55:33,346 WARN [Listener at localhost.localdomain/44337] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 18:55:33,360 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x10cc9b94a5e35749: Processing first storage report for DS-a01bb835-b1c2-485e-ba09-f1de42c50d2c from datanode 5bdd1e7a-a0c3-478c-8c82-7fb084cb0abc 2023-06-08 18:55:33,361 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x10cc9b94a5e35749: from storage DS-a01bb835-b1c2-485e-ba09-f1de42c50d2c node DatanodeRegistration(127.0.0.1:41341, datanodeUuid=5bdd1e7a-a0c3-478c-8c82-7fb084cb0abc, infoPort=34843, infoSecurePort=0, ipcPort=34261, storageInfo=lv=-57;cid=testClusterID;nsid=1786222594;c=1686250532717), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-08 18:55:33,361 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x10cc9b94a5e35749: Processing first storage report for DS-950b425f-fe9a-41b9-be7d-bea254a54c27 from datanode 5bdd1e7a-a0c3-478c-8c82-7fb084cb0abc 2023-06-08 18:55:33,361 INFO [Block report processor] 
blockmanagement.BlockManager(2228): BLOCK* processReport 0x10cc9b94a5e35749: from storage DS-950b425f-fe9a-41b9-be7d-bea254a54c27 node DatanodeRegistration(127.0.0.1:41341, datanodeUuid=5bdd1e7a-a0c3-478c-8c82-7fb084cb0abc, infoPort=34843, infoSecurePort=0, ipcPort=34261, storageInfo=lv=-57;cid=testClusterID;nsid=1786222594;c=1686250532717), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 18:55:33,502 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xdbf05546d2104a23: Processing first storage report for DS-0ce6a724-ca49-43f5-807f-25662b92b3c0 from datanode 5148ade7-3339-43bb-b7ef-d9e85ffd8826 2023-06-08 18:55:33,502 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xdbf05546d2104a23: from storage DS-0ce6a724-ca49-43f5-807f-25662b92b3c0 node DatanodeRegistration(127.0.0.1:33281, datanodeUuid=5148ade7-3339-43bb-b7ef-d9e85ffd8826, infoPort=45513, infoSecurePort=0, ipcPort=44337, storageInfo=lv=-57;cid=testClusterID;nsid=1786222594;c=1686250532717), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 18:55:33,502 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xdbf05546d2104a23: Processing first storage report for DS-abb882cd-7e27-4ee3-ad3a-a4f1a42fae46 from datanode 5148ade7-3339-43bb-b7ef-d9e85ffd8826 2023-06-08 18:55:33,502 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xdbf05546d2104a23: from storage DS-abb882cd-7e27-4ee3-ad3a-a4f1a42fae46 node DatanodeRegistration(127.0.0.1:33281, datanodeUuid=5148ade7-3339-43bb-b7ef-d9e85ffd8826, infoPort=45513, infoSecurePort=0, ipcPort=44337, storageInfo=lv=-57;cid=testClusterID;nsid=1786222594;c=1686250532717), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 18:55:33,563 DEBUG [Listener at localhost.localdomain/44337] 
hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be 2023-06-08 18:55:33,570 INFO [Listener at localhost.localdomain/44337] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/cluster_b99a2616-cab3-0f0b-e54f-fc7bbe7c57e5/zookeeper_0, clientPort=63926, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/cluster_b99a2616-cab3-0f0b-e54f-fc7bbe7c57e5/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/cluster_b99a2616-cab3-0f0b-e54f-fc7bbe7c57e5/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-06-08 18:55:33,574 INFO [Listener at localhost.localdomain/44337] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=63926 2023-06-08 18:55:33,574 INFO [Listener at localhost.localdomain/44337] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 18:55:33,576 INFO [Listener at localhost.localdomain/44337] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 18:55:33,611 INFO [Listener at localhost.localdomain/44337] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6 with version=8 
2023-06-08 18:55:33,611 INFO [Listener at localhost.localdomain/44337] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/hbase-staging 2023-06-08 18:55:33,613 INFO [Listener at localhost.localdomain/44337] client.ConnectionUtils(127): master/jenkins-hbase17:0 server-side Connection retries=45 2023-06-08 18:55:33,613 INFO [Listener at localhost.localdomain/44337] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 18:55:33,613 INFO [Listener at localhost.localdomain/44337] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-08 18:55:33,613 INFO [Listener at localhost.localdomain/44337] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-08 18:55:33,613 INFO [Listener at localhost.localdomain/44337] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 18:55:33,613 INFO [Listener at localhost.localdomain/44337] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-08 18:55:33,614 INFO [Listener at localhost.localdomain/44337] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-06-08 18:55:33,615 INFO [Listener at localhost.localdomain/44337] 
ipc.NettyRpcServer(120): Bind to /136.243.18.41:35289 2023-06-08 18:55:33,615 INFO [Listener at localhost.localdomain/44337] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 18:55:33,617 INFO [Listener at localhost.localdomain/44337] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 18:55:33,618 INFO [Listener at localhost.localdomain/44337] zookeeper.RecoverableZooKeeper(93): Process identifier=master:35289 connecting to ZooKeeper ensemble=127.0.0.1:63926 2023-06-08 18:55:33,636 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): master:352890x0, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-08 18:55:33,643 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:35289-0x100abcaadde0000 connected 2023-06-08 18:55:33,672 DEBUG [Listener at localhost.localdomain/44337] zookeeper.ZKUtil(164): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-08 18:55:33,672 DEBUG [Listener at localhost.localdomain/44337] zookeeper.ZKUtil(164): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 18:55:33,673 DEBUG [Listener at localhost.localdomain/44337] zookeeper.ZKUtil(164): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-08 18:55:33,675 DEBUG [Listener at localhost.localdomain/44337] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35289 2023-06-08 18:55:33,675 DEBUG [Listener at 
localhost.localdomain/44337] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35289 2023-06-08 18:55:33,676 DEBUG [Listener at localhost.localdomain/44337] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35289 2023-06-08 18:55:33,676 DEBUG [Listener at localhost.localdomain/44337] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35289 2023-06-08 18:55:33,677 DEBUG [Listener at localhost.localdomain/44337] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35289 2023-06-08 18:55:33,677 INFO [Listener at localhost.localdomain/44337] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6, hbase.cluster.distributed=false 2023-06-08 18:55:33,690 INFO [Listener at localhost.localdomain/44337] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-06-08 18:55:33,690 INFO [Listener at localhost.localdomain/44337] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 18:55:33,690 INFO [Listener at localhost.localdomain/44337] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-08 18:55:33,690 INFO [Listener at localhost.localdomain/44337] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-08 18:55:33,690 INFO [Listener at localhost.localdomain/44337] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-06-08 18:55:33,691 INFO [Listener at localhost.localdomain/44337] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-08 18:55:33,691 INFO [Listener at localhost.localdomain/44337] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-08 18:55:33,692 INFO [Listener at localhost.localdomain/44337] ipc.NettyRpcServer(120): Bind to /136.243.18.41:41765 2023-06-08 18:55:33,693 INFO [Listener at localhost.localdomain/44337] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-08 18:55:33,694 DEBUG [Listener at localhost.localdomain/44337] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-08 18:55:33,695 INFO [Listener at localhost.localdomain/44337] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 18:55:33,697 INFO [Listener at localhost.localdomain/44337] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 18:55:33,700 INFO [Listener at localhost.localdomain/44337] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41765 connecting to ZooKeeper ensemble=127.0.0.1:63926 2023-06-08 18:55:33,708 DEBUG [Listener at localhost.localdomain/44337] zookeeper.ZKUtil(164): regionserver:417650x0, quorum=127.0.0.1:63926, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-08 18:55:33,708 DEBUG [Listener at localhost.localdomain/44337] zookeeper.ZKUtil(164): regionserver:417650x0, quorum=127.0.0.1:63926, baseZNode=/hbase Set 
watcher on znode that does not yet exist, /hbase/running 2023-06-08 18:55:33,709 DEBUG [Listener at localhost.localdomain/44337] zookeeper.ZKUtil(164): regionserver:417650x0, quorum=127.0.0.1:63926, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-08 18:55:33,711 DEBUG [Listener at localhost.localdomain/44337] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41765 2023-06-08 18:55:33,711 DEBUG [Listener at localhost.localdomain/44337] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41765 2023-06-08 18:55:33,712 DEBUG [Listener at localhost.localdomain/44337] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41765 2023-06-08 18:55:33,712 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): regionserver:417650x0, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-08 18:55:33,713 DEBUG [Listener at localhost.localdomain/44337] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41765 2023-06-08 18:55:33,714 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41765-0x100abcaadde0001 connected 2023-06-08 18:55:33,714 DEBUG [Listener at localhost.localdomain/44337] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41765 2023-06-08 18:55:33,715 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase17.apache.org,35289,1686250533613 2023-06-08 18:55:33,716 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, 
type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-08 18:55:33,717 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase17.apache.org,35289,1686250533613 2023-06-08 18:55:33,718 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-08 18:55:33,718 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:55:33,718 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-08 18:55:33,719 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): regionserver:41765-0x100abcaadde0001, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-08 18:55:33,722 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase17.apache.org,35289,1686250533613 from backup master directory 2023-06-08 18:55:33,722 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-08 18:55:33,723 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase17.apache.org,35289,1686250533613 2023-06-08 18:55:33,723 WARN [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-06-08 18:55:33,723 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-08 18:55:33,723 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase17.apache.org,35289,1686250533613 2023-06-08 18:55:33,744 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/hbase.id with ID: e6645462-c4a2-499b-b4ed-10eb031876b5 2023-06-08 18:55:33,756 INFO [master/jenkins-hbase17:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 18:55:33,758 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:55:33,781 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x2b0f1d0c to 127.0.0.1:63926 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-08 18:55:33,785 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1d103392, compressor=null, 
tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-08 18:55:33,785 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-08 18:55:33,786 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-06-08 18:55:33,786 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 18:55:33,788 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/MasterData/data/master/store-tmp 2023-06-08 18:55:33,803 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:55:33,803 DEBUG 
[master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-08 18:55:33,803 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 18:55:33,804 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 18:55:33,804 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-08 18:55:33,804 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 18:55:33,804 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-08 18:55:33,804 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-08 18:55:33,804 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/MasterData/WALs/jenkins-hbase17.apache.org,35289,1686250533613 2023-06-08 18:55:33,807 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C35289%2C1686250533613, suffix=, logDir=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/MasterData/WALs/jenkins-hbase17.apache.org,35289,1686250533613, archiveDir=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/MasterData/oldWALs, maxLogs=10 2023-06-08 18:55:33,816 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/MasterData/WALs/jenkins-hbase17.apache.org,35289,1686250533613/jenkins-hbase17.apache.org%2C35289%2C1686250533613.1686250533808 2023-06-08 18:55:33,817 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41341,DS-a01bb835-b1c2-485e-ba09-f1de42c50d2c,DISK], DatanodeInfoWithStorage[127.0.0.1:33281,DS-0ce6a724-ca49-43f5-807f-25662b92b3c0,DISK]] 2023-06-08 18:55:33,817 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-08 18:55:33,817 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:55:33,817 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-08 18:55:33,817 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-08 18:55:33,820 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-06-08 18:55:33,825 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-08 18:55:33,825 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-08 18:55:33,826 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] 
regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:55:33,827 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-08 18:55:33,829 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-08 18:55:33,834 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-08 18:55:33,837 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 18:55:33,840 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=875609, jitterRate=0.11339567601680756}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-08 18:55:33,840 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-08 18:55:33,855 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, 
flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-08 18:55:33,857 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-08 18:55:33,858 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-06-08 18:55:33,858 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-06-08 18:55:33,859 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-06-08 18:55:33,860 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-06-08 18:55:33,860 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-08 18:55:33,863 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-08 18:55:33,865 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-06-08 18:55:33,882 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-08 18:55:33,883 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-06-08 18:55:33,884 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-08 18:55:33,884 INFO [master/jenkins-hbase17:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-08 18:55:33,885 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-08 18:55:33,888 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:55:33,890 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-08 18:55:33,897 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): 
master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-08 18:55:33,899 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-08 18:55:33,901 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): regionserver:41765-0x100abcaadde0001, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-08 18:55:33,901 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-08 18:55:33,901 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:55:33,902 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase17.apache.org,35289,1686250533613, sessionid=0x100abcaadde0000, setting cluster-up flag (Was=false) 2023-06-08 18:55:33,904 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:55:33,907 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-08 18:55:33,908 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] 
procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,35289,1686250533613 2023-06-08 18:55:33,911 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:55:33,914 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-08 18:55:33,915 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,35289,1686250533613 2023-06-08 18:55:33,916 WARN [master/jenkins-hbase17:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/.hbase-snapshot/.tmp 2023-06-08 18:55:33,919 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-08 18:55:33,920 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-06-08 18:55:33,920 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-06-08 18:55:33,920 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-06-08 18:55:33,920 DEBUG 
[master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-06-08 18:55:33,920 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase17:0, corePoolSize=10, maxPoolSize=10 2023-06-08 18:55:33,920 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:55:33,920 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-06-08 18:55:33,920 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:55:33,929 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1686250563929 2023-06-08 18:55:33,929 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-08 18:55:33,929 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-08 18:55:33,929 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-08 18:55:33,929 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-08 18:55:33,929 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-08 18:55:33,929 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-08 18:55:33,929 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-08 18:55:33,930 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-08 18:55:33,930 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-08 18:55:33,930 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-08 18:55:33,930 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-08 18:55:33,930 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-08 18:55:33,930 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-08 18:55:33,931 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-08 18:55:33,931 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large 
file=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1686250533931,5,FailOnTimeoutGroup] 2023-06-08 18:55:33,931 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1686250533931,5,FailOnTimeoutGroup] 2023-06-08 18:55:33,931 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-08 18:55:33,931 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-08 18:55:33,931 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-08 18:55:33,931 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-06-08 18:55:33,932 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-08 18:55:33,950 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-08 18:55:33,951 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-08 18:55:33,951 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', 
KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6 2023-06-08 18:55:33,972 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:55:33,973 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-08 18:55:33,975 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/hbase/meta/1588230740/info 2023-06-08 18:55:33,976 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 
0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-08 18:55:33,977 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:55:33,977 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-08 18:55:33,979 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/hbase/meta/1588230740/rep_barrier 2023-06-08 18:55:33,979 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-08 
18:55:33,980 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:55:33,980 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-08 18:55:33,982 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/hbase/meta/1588230740/table 2023-06-08 18:55:33,983 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-08 18:55:33,983 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:55:33,985 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/hbase/meta/1588230740 2023-06-08 18:55:33,986 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/hbase/meta/1588230740 2023-06-08 18:55:33,989 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-08 18:55:33,990 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-08 18:55:33,993 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 18:55:33,994 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=807818, jitterRate=0.027194052934646606}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-08 18:55:33,994 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-08 18:55:33,994 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-08 18:55:33,994 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-08 18:55:33,994 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-08 18:55:33,994 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-08 18:55:33,994 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for 
region hbase:meta,,1.1588230740 2023-06-08 18:55:33,995 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-08 18:55:33,995 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-08 18:55:33,997 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-08 18:55:33,997 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-08 18:55:33,997 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-08 18:55:34,001 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-08 18:55:34,003 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-08 18:55:34,018 INFO [RS:0;jenkins-hbase17:41765] regionserver.HRegionServer(951): ClusterId : e6645462-c4a2-499b-b4ed-10eb031876b5 2023-06-08 18:55:34,019 DEBUG [RS:0;jenkins-hbase17:41765] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-08 18:55:34,022 DEBUG [RS:0;jenkins-hbase17:41765] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-08 18:55:34,022 DEBUG [RS:0;jenkins-hbase17:41765] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-08 18:55:34,024 DEBUG [RS:0;jenkins-hbase17:41765] 
procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-08 18:55:34,026 DEBUG [RS:0;jenkins-hbase17:41765] zookeeper.ReadOnlyZKClient(139): Connect 0x67b1ab40 to 127.0.0.1:63926 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-08 18:55:34,038 DEBUG [RS:0;jenkins-hbase17:41765] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@697659c6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-08 18:55:34,038 DEBUG [RS:0;jenkins-hbase17:41765] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6d49c030, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-06-08 18:55:34,046 DEBUG [RS:0;jenkins-hbase17:41765] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase17:41765 2023-06-08 18:55:34,046 INFO [RS:0;jenkins-hbase17:41765] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-08 18:55:34,046 INFO [RS:0;jenkins-hbase17:41765] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-08 18:55:34,046 DEBUG [RS:0;jenkins-hbase17:41765] regionserver.HRegionServer(1022): About to register with Master. 
2023-06-08 18:55:34,047 INFO [RS:0;jenkins-hbase17:41765] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase17.apache.org,35289,1686250533613 with isa=jenkins-hbase17.apache.org/136.243.18.41:41765, startcode=1686250533689 2023-06-08 18:55:34,047 DEBUG [RS:0;jenkins-hbase17:41765] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-08 18:55:34,050 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:37393, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-06-08 18:55:34,051 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35289] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,41765,1686250533689 2023-06-08 18:55:34,052 DEBUG [RS:0;jenkins-hbase17:41765] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6 2023-06-08 18:55:34,052 DEBUG [RS:0;jenkins-hbase17:41765] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:42703 2023-06-08 18:55:34,052 DEBUG [RS:0;jenkins-hbase17:41765] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-08 18:55:34,053 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 18:55:34,054 DEBUG [RS:0;jenkins-hbase17:41765] zookeeper.ZKUtil(162): regionserver:41765-0x100abcaadde0001, quorum=127.0.0.1:63926, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41765,1686250533689 2023-06-08 18:55:34,054 WARN [RS:0;jenkins-hbase17:41765] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will 
not be cleared on crash by start scripts (Longer MTTR!) 2023-06-08 18:55:34,054 INFO [RS:0;jenkins-hbase17:41765] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 18:55:34,054 DEBUG [RS:0;jenkins-hbase17:41765] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,41765,1686250533689 2023-06-08 18:55:34,054 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,41765,1686250533689] 2023-06-08 18:55:34,059 DEBUG [RS:0;jenkins-hbase17:41765] zookeeper.ZKUtil(162): regionserver:41765-0x100abcaadde0001, quorum=127.0.0.1:63926, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41765,1686250533689 2023-06-08 18:55:34,060 DEBUG [RS:0;jenkins-hbase17:41765] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-08 18:55:34,060 INFO [RS:0;jenkins-hbase17:41765] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-08 18:55:34,064 INFO [RS:0;jenkins-hbase17:41765] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-08 18:55:34,064 INFO [RS:0;jenkins-hbase17:41765] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-08 18:55:34,064 INFO [RS:0;jenkins-hbase17:41765] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-06-08 18:55:34,068 INFO [RS:0;jenkins-hbase17:41765] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S
2023-06-08 18:55:34,069 INFO [RS:0;jenkins-hbase17:41765] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled.
2023-06-08 18:55:34,070 DEBUG [RS:0;jenkins-hbase17:41765] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-06-08 18:55:34,070 DEBUG [RS:0;jenkins-hbase17:41765] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-06-08 18:55:34,070 DEBUG [RS:0;jenkins-hbase17:41765] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-06-08 18:55:34,070 DEBUG [RS:0;jenkins-hbase17:41765] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-06-08 18:55:34,070 DEBUG [RS:0;jenkins-hbase17:41765] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-06-08 18:55:34,070 DEBUG [RS:0;jenkins-hbase17:41765] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2
2023-06-08 18:55:34,070 DEBUG [RS:0;jenkins-hbase17:41765] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-06-08 18:55:34,070 DEBUG [RS:0;jenkins-hbase17:41765] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-06-08 18:55:34,070 DEBUG [RS:0;jenkins-hbase17:41765] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-06-08 18:55:34,070 DEBUG [RS:0;jenkins-hbase17:41765] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-06-08 18:55:34,071 INFO [RS:0;jenkins-hbase17:41765] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled.
2023-06-08 18:55:34,071 INFO [RS:0;jenkins-hbase17:41765] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled.
2023-06-08 18:55:34,071 INFO [RS:0;jenkins-hbase17:41765] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled.
2023-06-08 18:55:34,081 INFO [RS:0;jenkins-hbase17:41765] regionserver.HeapMemoryManager(209): Starting, tuneOn=false
2023-06-08 18:55:34,081 INFO [RS:0;jenkins-hbase17:41765] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,41765,1686250533689-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled.
2023-06-08 18:55:34,091 INFO [RS:0;jenkins-hbase17:41765] regionserver.Replication(203): jenkins-hbase17.apache.org,41765,1686250533689 started
2023-06-08 18:55:34,091 INFO [RS:0;jenkins-hbase17:41765] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,41765,1686250533689, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:41765, sessionid=0x100abcaadde0001
2023-06-08 18:55:34,091 DEBUG [RS:0;jenkins-hbase17:41765] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting
2023-06-08 18:55:34,091 DEBUG [RS:0;jenkins-hbase17:41765] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,41765,1686250533689
2023-06-08 18:55:34,091 DEBUG [RS:0;jenkins-hbase17:41765] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,41765,1686250533689'
2023-06-08 18:55:34,091 DEBUG [RS:0;jenkins-hbase17:41765] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-06-08 18:55:34,092 DEBUG [RS:0;jenkins-hbase17:41765] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-06-08 18:55:34,093 DEBUG [RS:0;jenkins-hbase17:41765] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started
2023-06-08 18:55:34,093 DEBUG [RS:0;jenkins-hbase17:41765] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting
2023-06-08 18:55:34,093 DEBUG [RS:0;jenkins-hbase17:41765] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,41765,1686250533689
2023-06-08 18:55:34,093 DEBUG [RS:0;jenkins-hbase17:41765] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,41765,1686250533689'
2023-06-08 18:55:34,093 DEBUG [RS:0;jenkins-hbase17:41765] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort'
2023-06-08 18:55:34,093 DEBUG [RS:0;jenkins-hbase17:41765] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired'
2023-06-08 18:55:34,093 DEBUG [RS:0;jenkins-hbase17:41765] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started
2023-06-08 18:55:34,094 INFO [RS:0;jenkins-hbase17:41765] quotas.RegionServerRpcQuotaManager(63): Quota support disabled
2023-06-08 18:55:34,094 INFO [RS:0;jenkins-hbase17:41765] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager.
2023-06-08 18:55:34,153 DEBUG [jenkins-hbase17:35289] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1
2023-06-08 18:55:34,154 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,41765,1686250533689, state=OPENING
2023-06-08 18:55:34,155 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it
2023-06-08 18:55:34,156 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-08 18:55:34,157 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,41765,1686250533689}]
2023-06-08 18:55:34,157 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED
2023-06-08 18:55:34,196 INFO [RS:0;jenkins-hbase17:41765] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C41765%2C1686250533689, suffix=, logDir=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,41765,1686250533689, archiveDir=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/oldWALs, maxLogs=32
2023-06-08 18:55:34,206 INFO [RS:0;jenkins-hbase17:41765] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,41765,1686250533689/jenkins-hbase17.apache.org%2C41765%2C1686250533689.1686250534198
2023-06-08 18:55:34,207 DEBUG [RS:0;jenkins-hbase17:41765] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33281,DS-0ce6a724-ca49-43f5-807f-25662b92b3c0,DISK], DatanodeInfoWithStorage[127.0.0.1:41341,DS-a01bb835-b1c2-485e-ba09-f1de42c50d2c,DISK]]
2023-06-08 18:55:34,310 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,41765,1686250533689
2023-06-08 18:55:34,311 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false
2023-06-08 18:55:34,313 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:39834, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService
2023-06-08 18:55:34,318 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740
2023-06-08 18:55:34,318 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-06-08 18:55:34,321 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C41765%2C1686250533689.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,41765,1686250533689, archiveDir=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/oldWALs, maxLogs=32
2023-06-08 18:55:34,338 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,41765,1686250533689/jenkins-hbase17.apache.org%2C41765%2C1686250533689.meta.1686250534323.meta
2023-06-08 18:55:34,338 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41341,DS-a01bb835-b1c2-485e-ba09-f1de42c50d2c,DISK], DatanodeInfoWithStorage[127.0.0.1:33281,DS-0ce6a724-ca49-43f5-807f-25662b92b3c0,DISK]]
2023-06-08 18:55:34,338 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}
2023-06-08 18:55:34,338 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911
2023-06-08 18:55:34,339 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService
2023-06-08 18:55:34,340 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully.
2023-06-08 18:55:34,340 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740
2023-06-08 18:55:34,340 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-08 18:55:34,340 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1588230740
2023-06-08 18:55:34,340 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1588230740
2023-06-08 18:55:34,342 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740
2023-06-08 18:55:34,345 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/hbase/meta/1588230740/info
2023-06-08 18:55:34,345 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/hbase/meta/1588230740/info
2023-06-08 18:55:34,345 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info
2023-06-08 18:55:34,346 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-08 18:55:34,346 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740
2023-06-08 18:55:34,347 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/hbase/meta/1588230740/rep_barrier
2023-06-08 18:55:34,347 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/hbase/meta/1588230740/rep_barrier
2023-06-08 18:55:34,347 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier
2023-06-08 18:55:34,348 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-08 18:55:34,348 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740
2023-06-08 18:55:34,349 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/hbase/meta/1588230740/table
2023-06-08 18:55:34,349 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/hbase/meta/1588230740/table
2023-06-08 18:55:34,350 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table
2023-06-08 18:55:34,350 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-08 18:55:34,352 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/hbase/meta/1588230740
2023-06-08 18:55:34,353 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/hbase/meta/1588230740
2023-06-08 18:55:34,355 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead.
2023-06-08 18:55:34,358 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1588230740
2023-06-08 18:55:34,359 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=831327, jitterRate=0.057087093591690063}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216}
2023-06-08 18:55:34,359 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1588230740:
2023-06-08 18:55:34,360 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1686250534310
2023-06-08 18:55:34,364 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740
2023-06-08 18:55:34,364 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740
2023-06-08 18:55:34,365 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,41765,1686250533689, state=OPEN
2023-06-08 18:55:34,367 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server
2023-06-08 18:55:34,367 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED
2023-06-08 18:55:34,369 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2
2023-06-08 18:55:34,370 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,41765,1686250533689 in 211 msec
2023-06-08 18:55:34,372 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1
2023-06-08 18:55:34,372 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 373 msec
2023-06-08 18:55:34,374 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 456 msec
2023-06-08 18:55:34,374 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1686250534374, completionTime=-1
2023-06-08 18:55:34,375 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running
2023-06-08 18:55:34,375 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster...
2023-06-08 18:55:34,377 DEBUG [hconnection-0xbe4c24-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-06-08 18:55:34,379 INFO [RS-EventLoopGroup-6-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:39842, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-06-08 18:55:34,381 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1
2023-06-08 18:55:34,381 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1686250594381
2023-06-08 18:55:34,381 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1686250654381
2023-06-08 18:55:34,381 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec
2023-06-08 18:55:34,387 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,35289,1686250533613-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled.
2023-06-08 18:55:34,387 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,35289,1686250533613-BalancerChore, period=300000, unit=MILLISECONDS is enabled.
2023-06-08 18:55:34,387 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,35289,1686250533613-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled.
2023-06-08 18:55:34,387 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase17:35289, period=300000, unit=MILLISECONDS is enabled.
2023-06-08 18:55:34,387 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled.
2023-06-08 18:55:34,387 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating...
2023-06-08 18:55:34,387 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}
2023-06-08 18:55:34,388 DEBUG [master/jenkins-hbase17:0.Chore.1] janitor.CatalogJanitor(175):
2023-06-08 18:55:34,389 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace
2023-06-08 18:55:34,391 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION
2023-06-08 18:55:34,392 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT
2023-06-08 18:55:34,394 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/.tmp/data/hbase/namespace/62d543670fdace48ca11e453928cc34f
2023-06-08 18:55:34,395 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/.tmp/data/hbase/namespace/62d543670fdace48ca11e453928cc34f empty.
2023-06-08 18:55:34,396 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/.tmp/data/hbase/namespace/62d543670fdace48ca11e453928cc34f
2023-06-08 18:55:34,396 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions
2023-06-08 18:55:34,409 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001
2023-06-08 18:55:34,410 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 62d543670fdace48ca11e453928cc34f, NAME => 'hbase:namespace,,1686250534387.62d543670fdace48ca11e453928cc34f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/.tmp
2023-06-08 18:55:34,421 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686250534387.62d543670fdace48ca11e453928cc34f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-08 18:55:34,421 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 62d543670fdace48ca11e453928cc34f, disabling compactions & flushes
2023-06-08 18:55:34,421 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686250534387.62d543670fdace48ca11e453928cc34f.
2023-06-08 18:55:34,421 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686250534387.62d543670fdace48ca11e453928cc34f.
2023-06-08 18:55:34,421 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686250534387.62d543670fdace48ca11e453928cc34f. after waiting 0 ms
2023-06-08 18:55:34,421 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686250534387.62d543670fdace48ca11e453928cc34f.
2023-06-08 18:55:34,421 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686250534387.62d543670fdace48ca11e453928cc34f.
2023-06-08 18:55:34,421 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 62d543670fdace48ca11e453928cc34f:
2023-06-08 18:55:34,424 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META
2023-06-08 18:55:34,426 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1686250534387.62d543670fdace48ca11e453928cc34f.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686250534426"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686250534426"}]},"ts":"1686250534426"}
2023-06-08 18:55:34,429 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta.
2023-06-08 18:55:34,431 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS
2023-06-08 18:55:34,431 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686250534431"}]},"ts":"1686250534431"}
2023-06-08 18:55:34,433 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta
2023-06-08 18:55:34,437 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=62d543670fdace48ca11e453928cc34f, ASSIGN}]
2023-06-08 18:55:34,439 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=62d543670fdace48ca11e453928cc34f, ASSIGN
2023-06-08 18:55:34,440 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=62d543670fdace48ca11e453928cc34f, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,41765,1686250533689; forceNewPlan=false, retain=false
2023-06-08 18:55:34,592 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=62d543670fdace48ca11e453928cc34f, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,41765,1686250533689
2023-06-08 18:55:34,592 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1686250534387.62d543670fdace48ca11e453928cc34f.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686250534591"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686250534591"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686250534591"}]},"ts":"1686250534591"}
2023-06-08 18:55:34,595 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 62d543670fdace48ca11e453928cc34f, server=jenkins-hbase17.apache.org,41765,1686250533689}]
2023-06-08 18:55:34,754 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1686250534387.62d543670fdace48ca11e453928cc34f.
2023-06-08 18:55:34,754 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 62d543670fdace48ca11e453928cc34f, NAME => 'hbase:namespace,,1686250534387.62d543670fdace48ca11e453928cc34f.', STARTKEY => '', ENDKEY => ''}
2023-06-08 18:55:34,754 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 62d543670fdace48ca11e453928cc34f
2023-06-08 18:55:34,755 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686250534387.62d543670fdace48ca11e453928cc34f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-08 18:55:34,755 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 62d543670fdace48ca11e453928cc34f
2023-06-08 18:55:34,755 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 62d543670fdace48ca11e453928cc34f
2023-06-08 18:55:34,756 INFO [StoreOpener-62d543670fdace48ca11e453928cc34f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 62d543670fdace48ca11e453928cc34f
2023-06-08 18:55:34,758 DEBUG [StoreOpener-62d543670fdace48ca11e453928cc34f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/hbase/namespace/62d543670fdace48ca11e453928cc34f/info
2023-06-08 18:55:34,758 DEBUG [StoreOpener-62d543670fdace48ca11e453928cc34f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/hbase/namespace/62d543670fdace48ca11e453928cc34f/info
2023-06-08 18:55:34,759 INFO [StoreOpener-62d543670fdace48ca11e453928cc34f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 62d543670fdace48ca11e453928cc34f columnFamilyName info
2023-06-08 18:55:34,760 INFO [StoreOpener-62d543670fdace48ca11e453928cc34f-1] regionserver.HStore(310): Store=62d543670fdace48ca11e453928cc34f/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-08 18:55:34,762 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/hbase/namespace/62d543670fdace48ca11e453928cc34f
2023-06-08 18:55:34,763 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/hbase/namespace/62d543670fdace48ca11e453928cc34f
2023-06-08 18:55:34,766 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 62d543670fdace48ca11e453928cc34f
2023-06-08 18:55:34,769 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/hbase/namespace/62d543670fdace48ca11e453928cc34f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-06-08 18:55:34,769 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 62d543670fdace48ca11e453928cc34f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=800241, jitterRate=0.017559155821800232}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-06-08 18:55:34,769 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 62d543670fdace48ca11e453928cc34f:
2023-06-08 18:55:34,771 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1686250534387.62d543670fdace48ca11e453928cc34f., pid=6, masterSystemTime=1686250534748
2023-06-08 18:55:34,774 DEBUG
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1686250534387.62d543670fdace48ca11e453928cc34f. 2023-06-08 18:55:34,774 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1686250534387.62d543670fdace48ca11e453928cc34f. 2023-06-08 18:55:34,775 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=62d543670fdace48ca11e453928cc34f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,41765,1686250533689 2023-06-08 18:55:34,775 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1686250534387.62d543670fdace48ca11e453928cc34f.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686250534775"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686250534775"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686250534775"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686250534775"}]},"ts":"1686250534775"} 2023-06-08 18:55:34,781 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-08 18:55:34,781 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 62d543670fdace48ca11e453928cc34f, server=jenkins-hbase17.apache.org,41765,1686250533689 in 183 msec 2023-06-08 18:55:34,784 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-08 18:55:34,784 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=62d543670fdace48ca11e453928cc34f, ASSIGN in 344 msec 2023-06-08 18:55:34,785 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-08 18:55:34,786 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686250534785"}]},"ts":"1686250534785"} 2023-06-08 18:55:34,788 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-08 18:55:34,790 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-08 18:55:34,791 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-08 18:55:34,791 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-08 18:55:34,791 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:55:34,794 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 404 msec 2023-06-08 18:55:34,796 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-08 18:55:34,806 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): master:35289-0x100abcaadde0000, 
quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-08 18:55:34,810 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 13 msec 2023-06-08 18:55:34,818 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-08 18:55:34,827 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-08 18:55:34,832 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 13 msec 2023-06-08 18:55:34,842 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-08 18:55:34,844 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-08 18:55:34,844 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.121sec 2023-06-08 18:55:34,844 INFO [master/jenkins-hbase17:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-08 18:55:34,844 INFO [master/jenkins-hbase17:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-06-08 18:55:34,844 INFO [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-08 18:55:34,844 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,35289,1686250533613-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-08 18:55:34,844 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,35289,1686250533613-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-06-08 18:55:34,847 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-08 18:55:34,920 DEBUG [Listener at localhost.localdomain/44337] zookeeper.ReadOnlyZKClient(139): Connect 0x6fb83ba8 to 127.0.0.1:63926 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-08 18:55:34,924 DEBUG [Listener at localhost.localdomain/44337] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3acfef73, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-08 18:55:34,926 DEBUG [hconnection-0x5481ffb1-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-08 18:55:34,928 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:39850, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-08 18:55:34,930 INFO [Listener at localhost.localdomain/44337] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase17.apache.org,35289,1686250533613 2023-06-08 18:55:34,931 INFO [Listener at localhost.localdomain/44337] fs.HFileSystem(337): Added intercepting 
call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 18:55:34,934 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-08 18:55:34,934 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:55:34,935 INFO [Listener at localhost.localdomain/44337] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-08 18:55:34,947 INFO [Listener at localhost.localdomain/44337] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-06-08 18:55:34,947 INFO [Listener at localhost.localdomain/44337] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 18:55:34,947 INFO [Listener at localhost.localdomain/44337] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-08 18:55:34,947 INFO [Listener at localhost.localdomain/44337] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-08 18:55:34,947 INFO [Listener at localhost.localdomain/44337] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 18:55:34,948 INFO [Listener at localhost.localdomain/44337] ipc.RpcExecutor(189): 
Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-08 18:55:34,948 INFO [Listener at localhost.localdomain/44337] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-08 18:55:34,949 INFO [Listener at localhost.localdomain/44337] ipc.NettyRpcServer(120): Bind to /136.243.18.41:36375 2023-06-08 18:55:34,950 INFO [Listener at localhost.localdomain/44337] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-08 18:55:34,951 DEBUG [Listener at localhost.localdomain/44337] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-08 18:55:34,952 INFO [Listener at localhost.localdomain/44337] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 18:55:34,953 INFO [Listener at localhost.localdomain/44337] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 18:55:34,954 INFO [Listener at localhost.localdomain/44337] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36375 connecting to ZooKeeper ensemble=127.0.0.1:63926 2023-06-08 18:55:34,958 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): regionserver:363750x0, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-08 18:55:34,959 DEBUG [Listener at localhost.localdomain/44337] zookeeper.ZKUtil(162): regionserver:363750x0, quorum=127.0.0.1:63926, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-08 18:55:34,960 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKWatcher(625): regionserver:36375-0x100abcaadde0005 connected 2023-06-08 18:55:34,961 DEBUG [Listener at localhost.localdomain/44337] zookeeper.ZKUtil(162): regionserver:36375-0x100abcaadde0005, quorum=127.0.0.1:63926, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-06-08 18:55:34,962 DEBUG [Listener at localhost.localdomain/44337] zookeeper.ZKUtil(164): regionserver:36375-0x100abcaadde0005, quorum=127.0.0.1:63926, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-08 18:55:34,963 DEBUG [Listener at localhost.localdomain/44337] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36375 2023-06-08 18:55:34,963 DEBUG [Listener at localhost.localdomain/44337] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36375 2023-06-08 18:55:34,963 DEBUG [Listener at localhost.localdomain/44337] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36375 2023-06-08 18:55:34,964 DEBUG [Listener at localhost.localdomain/44337] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36375 2023-06-08 18:55:34,964 DEBUG [Listener at localhost.localdomain/44337] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36375 2023-06-08 18:55:34,970 INFO [RS:1;jenkins-hbase17:36375] regionserver.HRegionServer(951): ClusterId : e6645462-c4a2-499b-b4ed-10eb031876b5 2023-06-08 18:55:34,971 DEBUG [RS:1;jenkins-hbase17:36375] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-08 18:55:34,973 DEBUG [RS:1;jenkins-hbase17:36375] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-08 18:55:34,973 DEBUG [RS:1;jenkins-hbase17:36375] procedure.RegionServerProcedureManagerHost(43): 
Procedure online-snapshot initializing 2023-06-08 18:55:34,975 DEBUG [RS:1;jenkins-hbase17:36375] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-08 18:55:34,976 DEBUG [RS:1;jenkins-hbase17:36375] zookeeper.ReadOnlyZKClient(139): Connect 0x40c24db7 to 127.0.0.1:63926 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-08 18:55:34,981 DEBUG [RS:1;jenkins-hbase17:36375] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4e05bbdf, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-08 18:55:34,982 DEBUG [RS:1;jenkins-hbase17:36375] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4a3a26c7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-06-08 18:55:34,989 DEBUG [RS:1;jenkins-hbase17:36375] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase17:36375 2023-06-08 18:55:34,990 INFO [RS:1;jenkins-hbase17:36375] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-08 18:55:34,990 INFO [RS:1;jenkins-hbase17:36375] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-08 18:55:34,990 DEBUG [RS:1;jenkins-hbase17:36375] regionserver.HRegionServer(1022): About to register with Master. 
2023-06-08 18:55:34,991 INFO [RS:1;jenkins-hbase17:36375] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase17.apache.org,35289,1686250533613 with isa=jenkins-hbase17.apache.org/136.243.18.41:36375, startcode=1686250534947 2023-06-08 18:55:34,991 DEBUG [RS:1;jenkins-hbase17:36375] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-08 18:55:34,994 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:60791, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-06-08 18:55:34,995 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35289] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,36375,1686250534947 2023-06-08 18:55:34,995 DEBUG [RS:1;jenkins-hbase17:36375] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6 2023-06-08 18:55:34,995 DEBUG [RS:1;jenkins-hbase17:36375] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:42703 2023-06-08 18:55:34,995 DEBUG [RS:1;jenkins-hbase17:36375] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-08 18:55:34,996 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): regionserver:41765-0x100abcaadde0001, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 18:55:34,996 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 18:55:34,997 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node 
created, adding [jenkins-hbase17.apache.org,36375,1686250534947] 2023-06-08 18:55:34,997 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41765-0x100abcaadde0001, quorum=127.0.0.1:63926, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36375,1686250534947 2023-06-08 18:55:34,998 DEBUG [RS:1;jenkins-hbase17:36375] zookeeper.ZKUtil(162): regionserver:36375-0x100abcaadde0005, quorum=127.0.0.1:63926, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36375,1686250534947 2023-06-08 18:55:34,998 WARN [RS:1;jenkins-hbase17:36375] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-06-08 18:55:34,998 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41765-0x100abcaadde0001, quorum=127.0.0.1:63926, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41765,1686250533689 2023-06-08 18:55:34,998 INFO [RS:1;jenkins-hbase17:36375] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 18:55:34,999 DEBUG [RS:1;jenkins-hbase17:36375] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,36375,1686250534947 2023-06-08 18:55:35,004 DEBUG [RS:1;jenkins-hbase17:36375] zookeeper.ZKUtil(162): regionserver:36375-0x100abcaadde0005, quorum=127.0.0.1:63926, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,36375,1686250534947 2023-06-08 18:55:35,004 DEBUG [RS:1;jenkins-hbase17:36375] zookeeper.ZKUtil(162): regionserver:36375-0x100abcaadde0005, quorum=127.0.0.1:63926, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,41765,1686250533689 2023-06-08 18:55:35,005 DEBUG [RS:1;jenkins-hbase17:36375] regionserver.Replication(139): 
Replication stats-in-log period=300 seconds 2023-06-08 18:55:35,006 INFO [RS:1;jenkins-hbase17:36375] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-08 18:55:35,010 INFO [RS:1;jenkins-hbase17:36375] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-08 18:55:35,011 INFO [RS:1;jenkins-hbase17:36375] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-08 18:55:35,011 INFO [RS:1;jenkins-hbase17:36375] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-08 18:55:35,011 INFO [RS:1;jenkins-hbase17:36375] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-08 18:55:35,013 INFO [RS:1;jenkins-hbase17:36375] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-06-08 18:55:35,013 DEBUG [RS:1;jenkins-hbase17:36375] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:55:35,013 DEBUG [RS:1;jenkins-hbase17:36375] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:55:35,013 DEBUG [RS:1;jenkins-hbase17:36375] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:55:35,014 DEBUG [RS:1;jenkins-hbase17:36375] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:55:35,014 DEBUG [RS:1;jenkins-hbase17:36375] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:55:35,014 DEBUG [RS:1;jenkins-hbase17:36375] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-06-08 18:55:35,014 DEBUG [RS:1;jenkins-hbase17:36375] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:55:35,014 DEBUG [RS:1;jenkins-hbase17:36375] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:55:35,014 DEBUG [RS:1;jenkins-hbase17:36375] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:55:35,014 DEBUG [RS:1;jenkins-hbase17:36375] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, 
corePoolSize=1, maxPoolSize=1 2023-06-08 18:55:35,015 INFO [RS:1;jenkins-hbase17:36375] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-08 18:55:35,015 INFO [RS:1;jenkins-hbase17:36375] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-08 18:55:35,015 INFO [RS:1;jenkins-hbase17:36375] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-08 18:55:35,029 INFO [RS:1;jenkins-hbase17:36375] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-08 18:55:35,030 INFO [RS:1;jenkins-hbase17:36375] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,36375,1686250534947-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-08 18:55:35,046 INFO [RS:1;jenkins-hbase17:36375] regionserver.Replication(203): jenkins-hbase17.apache.org,36375,1686250534947 started 2023-06-08 18:55:35,046 INFO [RS:1;jenkins-hbase17:36375] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,36375,1686250534947, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:36375, sessionid=0x100abcaadde0005 2023-06-08 18:55:35,046 INFO [Listener at localhost.localdomain/44337] hbase.HBaseTestingUtility(3254): Started new server=Thread[RS:1;jenkins-hbase17:36375,5,FailOnTimeoutGroup] 2023-06-08 18:55:35,046 DEBUG [RS:1;jenkins-hbase17:36375] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-08 18:55:35,046 INFO [Listener at localhost.localdomain/44337] wal.TestLogRolling(323): Replication=2 2023-06-08 18:55:35,046 DEBUG [RS:1;jenkins-hbase17:36375] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,36375,1686250534947 2023-06-08 18:55:35,047 DEBUG [RS:1;jenkins-hbase17:36375] procedure.ZKProcedureMemberRpcs(357): 
Starting procedure member 'jenkins-hbase17.apache.org,36375,1686250534947' 2023-06-08 18:55:35,047 DEBUG [RS:1;jenkins-hbase17:36375] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-08 18:55:35,048 DEBUG [RS:1;jenkins-hbase17:36375] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-08 18:55:35,049 DEBUG [Listener at localhost.localdomain/44337] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-06-08 18:55:35,049 DEBUG [RS:1;jenkins-hbase17:36375] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-08 18:55:35,049 DEBUG [RS:1;jenkins-hbase17:36375] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-08 18:55:35,049 DEBUG [RS:1;jenkins-hbase17:36375] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,36375,1686250534947 2023-06-08 18:55:35,050 DEBUG [RS:1;jenkins-hbase17:36375] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,36375,1686250534947' 2023-06-08 18:55:35,050 DEBUG [RS:1;jenkins-hbase17:36375] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-06-08 18:55:35,051 DEBUG [RS:1;jenkins-hbase17:36375] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-08 18:55:35,051 DEBUG [RS:1;jenkins-hbase17:36375] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-08 18:55:35,051 INFO [RS:1;jenkins-hbase17:36375] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-08 18:55:35,052 INFO [RS:1;jenkins-hbase17:36375] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-06-08 18:55:35,053 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:39428, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-06-08 18:55:35,055 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35289] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-06-08 18:55:35,055 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35289] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 2023-06-08 18:55:35,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35289] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-08 18:55:35,058 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35289] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath 2023-06-08 18:55:35,060 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_PRE_OPERATION 2023-06-08 18:55:35,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35289] master.MasterRpcServices(697): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnDatanodeDeath" procId is: 9 
2023-06-08 18:55:35,061 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-08 18:55:35,062 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35289] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-08 18:55:35,063 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/475489ca47d830e9e063ab452deba395 2023-06-08 18:55:35,064 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/475489ca47d830e9e063ab452deba395 empty. 2023-06-08 18:55:35,064 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/475489ca47d830e9e063ab452deba395 2023-06-08 18:55:35,065 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnDatanodeDeath regions 2023-06-08 18:55:35,078 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/.tabledesc/.tableinfo.0000000001 2023-06-08 18:55:35,079 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(7675): creating {ENCODED => 475489ca47d830e9e063ab452deba395, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1686250535055.475489ca47d830e9e063ab452deba395.', STARTKEY => '', ENDKEY => ''}, 
tableDescriptor='TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/.tmp 2023-06-08 18:55:35,088 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1686250535055.475489ca47d830e9e063ab452deba395.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:55:35,088 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1604): Closing 475489ca47d830e9e063ab452deba395, disabling compactions & flushes 2023-06-08 18:55:35,088 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1686250535055.475489ca47d830e9e063ab452deba395. 2023-06-08 18:55:35,088 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1686250535055.475489ca47d830e9e063ab452deba395. 2023-06-08 18:55:35,088 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1686250535055.475489ca47d830e9e063ab452deba395. 
after waiting 0 ms 2023-06-08 18:55:35,088 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1686250535055.475489ca47d830e9e063ab452deba395. 2023-06-08 18:55:35,088 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1686250535055.475489ca47d830e9e063ab452deba395. 2023-06-08 18:55:35,088 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1558): Region close journal for 475489ca47d830e9e063ab452deba395: 2023-06-08 18:55:35,092 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ADD_TO_META 2023-06-08 18:55:35,093 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1686250535055.475489ca47d830e9e063ab452deba395.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1686250535093"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686250535093"}]},"ts":"1686250535093"} 2023-06-08 18:55:35,095 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-06-08 18:55:35,096 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-08 18:55:35,097 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686250535096"}]},"ts":"1686250535096"} 2023-06-08 18:55:35,098 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLING in hbase:meta 2023-06-08 18:55:35,105 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0} 2023-06-08 18:55:35,107 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-06-08 18:55:35,107 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-06-08 18:55:35,107 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-06-08 18:55:35,107 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=475489ca47d830e9e063ab452deba395, ASSIGN}] 2023-06-08 18:55:35,109 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=475489ca47d830e9e063ab452deba395, ASSIGN 2023-06-08 18:55:35,110 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; 
TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=475489ca47d830e9e063ab452deba395, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,36375,1686250534947; forceNewPlan=false, retain=false 2023-06-08 18:55:35,154 INFO [RS:1;jenkins-hbase17:36375] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C36375%2C1686250534947, suffix=, logDir=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,36375,1686250534947, archiveDir=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/oldWALs, maxLogs=32 2023-06-08 18:55:35,169 INFO [RS:1;jenkins-hbase17:36375] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,36375,1686250534947/jenkins-hbase17.apache.org%2C36375%2C1686250534947.1686250535156 2023-06-08 18:55:35,169 DEBUG [RS:1;jenkins-hbase17:36375] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41341,DS-a01bb835-b1c2-485e-ba09-f1de42c50d2c,DISK], DatanodeInfoWithStorage[127.0.0.1:33281,DS-0ce6a724-ca49-43f5-807f-25662b92b3c0,DISK]] 2023-06-08 18:55:35,263 INFO [jenkins-hbase17:35289] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-06-08 18:55:35,264 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=475489ca47d830e9e063ab452deba395, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,36375,1686250534947 2023-06-08 18:55:35,264 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1686250535055.475489ca47d830e9e063ab452deba395.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1686250535264"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686250535264"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686250535264"}]},"ts":"1686250535264"} 2023-06-08 18:55:35,267 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 475489ca47d830e9e063ab452deba395, server=jenkins-hbase17.apache.org,36375,1686250534947}] 2023-06-08 18:55:35,421 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,36375,1686250534947 2023-06-08 18:55:35,421 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-08 18:55:35,424 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:36472, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-08 18:55:35,429 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnDatanodeDeath,,1686250535055.475489ca47d830e9e063ab452deba395. 
2023-06-08 18:55:35,429 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 475489ca47d830e9e063ab452deba395, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1686250535055.475489ca47d830e9e063ab452deba395.', STARTKEY => '', ENDKEY => ''} 2023-06-08 18:55:35,430 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnDatanodeDeath 475489ca47d830e9e063ab452deba395 2023-06-08 18:55:35,430 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1686250535055.475489ca47d830e9e063ab452deba395.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:55:35,430 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 475489ca47d830e9e063ab452deba395 2023-06-08 18:55:35,430 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 475489ca47d830e9e063ab452deba395 2023-06-08 18:55:35,431 INFO [StoreOpener-475489ca47d830e9e063ab452deba395-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 475489ca47d830e9e063ab452deba395 2023-06-08 18:55:35,433 DEBUG [StoreOpener-475489ca47d830e9e063ab452deba395-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/default/TestLogRolling-testLogRollOnDatanodeDeath/475489ca47d830e9e063ab452deba395/info 2023-06-08 18:55:35,433 DEBUG [StoreOpener-475489ca47d830e9e063ab452deba395-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/default/TestLogRolling-testLogRollOnDatanodeDeath/475489ca47d830e9e063ab452deba395/info 2023-06-08 18:55:35,433 INFO [StoreOpener-475489ca47d830e9e063ab452deba395-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 475489ca47d830e9e063ab452deba395 columnFamilyName info 2023-06-08 18:55:35,434 INFO [StoreOpener-475489ca47d830e9e063ab452deba395-1] regionserver.HStore(310): Store=475489ca47d830e9e063ab452deba395/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:55:35,436 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/default/TestLogRolling-testLogRollOnDatanodeDeath/475489ca47d830e9e063ab452deba395 2023-06-08 18:55:35,436 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/default/TestLogRolling-testLogRollOnDatanodeDeath/475489ca47d830e9e063ab452deba395 2023-06-08 
18:55:35,439 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 475489ca47d830e9e063ab452deba395 2023-06-08 18:55:35,443 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/default/TestLogRolling-testLogRollOnDatanodeDeath/475489ca47d830e9e063ab452deba395/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 18:55:35,443 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 475489ca47d830e9e063ab452deba395; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=843082, jitterRate=0.07203462719917297}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-08 18:55:35,443 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 475489ca47d830e9e063ab452deba395: 2023-06-08 18:55:35,445 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnDatanodeDeath,,1686250535055.475489ca47d830e9e063ab452deba395., pid=11, masterSystemTime=1686250535421 2023-06-08 18:55:35,449 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnDatanodeDeath,,1686250535055.475489ca47d830e9e063ab452deba395. 2023-06-08 18:55:35,449 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnDatanodeDeath,,1686250535055.475489ca47d830e9e063ab452deba395. 
2023-06-08 18:55:35,450 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=475489ca47d830e9e063ab452deba395, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,36375,1686250534947 2023-06-08 18:55:35,450 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1686250535055.475489ca47d830e9e063ab452deba395.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1686250535450"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686250535450"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686250535450"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686250535450"}]},"ts":"1686250535450"} 2023-06-08 18:55:35,457 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-06-08 18:55:35,457 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 475489ca47d830e9e063ab452deba395, server=jenkins-hbase17.apache.org,36375,1686250534947 in 186 msec 2023-06-08 18:55:35,460 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-06-08 18:55:35,461 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=475489ca47d830e9e063ab452deba395, ASSIGN in 350 msec 2023-06-08 18:55:35,462 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-08 18:55:35,462 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686250535462"}]},"ts":"1686250535462"} 2023-06-08 18:55:35,464 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLED in hbase:meta 2023-06-08 18:55:35,467 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_POST_OPERATION 2023-06-08 18:55:35,468 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath in 411 msec 2023-06-08 18:55:36,725 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-08 18:55:40,060 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-06-08 18:55:40,061 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-06-08 18:55:41,006 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnDatanodeDeath' 2023-06-08 18:55:45,064 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35289] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-08 18:55:45,065 INFO [Listener at localhost.localdomain/44337] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnDatanodeDeath, procId: 9 completed 2023-06-08 18:55:45,067 DEBUG [Listener at localhost.localdomain/44337] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRollOnDatanodeDeath 
2023-06-08 18:55:45,067 DEBUG [Listener at localhost.localdomain/44337] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnDatanodeDeath,,1686250535055.475489ca47d830e9e063ab452deba395. 2023-06-08 18:55:45,079 WARN [Listener at localhost.localdomain/44337] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 18:55:45,081 WARN [Listener at localhost.localdomain/44337] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 18:55:45,082 INFO [Listener at localhost.localdomain/44337] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 18:55:45,087 INFO [Listener at localhost.localdomain/44337] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/java.io.tmpdir/Jetty_localhost_42841_datanode____fco732/webapp 2023-06-08 18:55:45,159 INFO [Listener at localhost.localdomain/44337] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42841 2023-06-08 18:55:45,171 WARN [Listener at localhost.localdomain/33925] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 18:55:45,188 WARN [Listener at localhost.localdomain/33925] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 18:55:45,191 WARN [Listener at localhost.localdomain/33925] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 18:55:45,192 INFO [Listener at localhost.localdomain/33925] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 18:55:45,197 INFO [Listener at localhost.localdomain/33925] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/java.io.tmpdir/Jetty_localhost_44603_datanode____.rnqrfk/webapp 2023-06-08 18:55:45,239 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbaad4a3baac2bf3: Processing first storage report for DS-e076b889-57b6-448a-8020-9c29e2da26b5 from datanode 53d1e374-b48c-47ca-82e4-ce8d5724016c 2023-06-08 18:55:45,239 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbaad4a3baac2bf3: from storage DS-e076b889-57b6-448a-8020-9c29e2da26b5 node DatanodeRegistration(127.0.0.1:35211, datanodeUuid=53d1e374-b48c-47ca-82e4-ce8d5724016c, infoPort=36177, infoSecurePort=0, ipcPort=33925, storageInfo=lv=-57;cid=testClusterID;nsid=1786222594;c=1686250532717), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 18:55:45,239 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbaad4a3baac2bf3: Processing first storage report for DS-6c3a17c3-771d-4d54-b0c0-5bcf3c24093d from datanode 53d1e374-b48c-47ca-82e4-ce8d5724016c 2023-06-08 18:55:45,240 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbaad4a3baac2bf3: from storage DS-6c3a17c3-771d-4d54-b0c0-5bcf3c24093d node DatanodeRegistration(127.0.0.1:35211, datanodeUuid=53d1e374-b48c-47ca-82e4-ce8d5724016c, infoPort=36177, infoSecurePort=0, ipcPort=33925, storageInfo=lv=-57;cid=testClusterID;nsid=1786222594;c=1686250532717), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 18:55:45,316 INFO [Listener at localhost.localdomain/33925] log.Slf4jLog(67): Started 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44603 2023-06-08 18:55:45,333 WARN [Listener at localhost.localdomain/46633] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 18:55:45,352 WARN [Listener at localhost.localdomain/46633] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 18:55:45,354 WARN [Listener at localhost.localdomain/46633] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 18:55:45,355 INFO [Listener at localhost.localdomain/46633] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 18:55:45,361 INFO [Listener at localhost.localdomain/46633] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/java.io.tmpdir/Jetty_localhost_44215_datanode____.3e4343/webapp 2023-06-08 18:55:45,468 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa595650eaa66d4f8: Processing first storage report for DS-839b40f3-38a2-487b-bda4-0238473e7517 from datanode 5b1f5b68-046d-4f20-9ba9-212737299826 2023-06-08 18:55:45,468 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa595650eaa66d4f8: from storage DS-839b40f3-38a2-487b-bda4-0238473e7517 node DatanodeRegistration(127.0.0.1:39905, datanodeUuid=5b1f5b68-046d-4f20-9ba9-212737299826, infoPort=39567, infoSecurePort=0, ipcPort=46633, storageInfo=lv=-57;cid=testClusterID;nsid=1786222594;c=1686250532717), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-08 18:55:45,469 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa595650eaa66d4f8: 
Processing first storage report for DS-39f73f7a-a332-4c20-b7c9-dbb4814fdec6 from datanode 5b1f5b68-046d-4f20-9ba9-212737299826 2023-06-08 18:55:45,469 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa595650eaa66d4f8: from storage DS-39f73f7a-a332-4c20-b7c9-dbb4814fdec6 node DatanodeRegistration(127.0.0.1:39905, datanodeUuid=5b1f5b68-046d-4f20-9ba9-212737299826, infoPort=39567, infoSecurePort=0, ipcPort=46633, storageInfo=lv=-57;cid=testClusterID;nsid=1786222594;c=1686250532717), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 18:55:45,511 INFO [Listener at localhost.localdomain/46633] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44215 2023-06-08 18:55:45,519 WARN [Listener at localhost.localdomain/42281] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 18:55:45,577 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4c06c4e7d7b94427: Processing first storage report for DS-88b38ff7-810c-4caf-8928-d261dba89922 from datanode cfed3381-ec20-46c4-8842-44667a8dc53b 2023-06-08 18:55:45,577 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4c06c4e7d7b94427: from storage DS-88b38ff7-810c-4caf-8928-d261dba89922 node DatanodeRegistration(127.0.0.1:33635, datanodeUuid=cfed3381-ec20-46c4-8842-44667a8dc53b, infoPort=46555, infoSecurePort=0, ipcPort=42281, storageInfo=lv=-57;cid=testClusterID;nsid=1786222594;c=1686250532717), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 18:55:45,577 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4c06c4e7d7b94427: Processing first storage report for DS-4d676633-227f-4a75-a7bf-eb7656ca5e8f from datanode cfed3381-ec20-46c4-8842-44667a8dc53b 2023-06-08 18:55:45,577 INFO [Block report processor] 
blockmanagement.BlockManager(2228): BLOCK* processReport 0x4c06c4e7d7b94427: from storage DS-4d676633-227f-4a75-a7bf-eb7656ca5e8f node DatanodeRegistration(127.0.0.1:33635, datanodeUuid=cfed3381-ec20-46c4-8842-44667a8dc53b, infoPort=46555, infoSecurePort=0, ipcPort=42281, storageInfo=lv=-57;cid=testClusterID;nsid=1786222594;c=1686250532717), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 18:55:45,626 WARN [Listener at localhost.localdomain/42281] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 18:55:45,628 WARN [ResponseProcessor for block BP-1651463838-136.243.18.41-1686250532717:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1651463838-136.243.18.41-1686250532717:blk_1073741832_1008 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-08 18:55:45,630 WARN [ResponseProcessor for block BP-1651463838-136.243.18.41-1686250532717:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1651463838-136.243.18.41-1686250532717:blk_1073741829_1005 java.io.IOException: Bad response ERROR for BP-1651463838-136.243.18.41-1686250532717:blk_1073741829_1005 from datanode DatanodeInfoWithStorage[127.0.0.1:33281,DS-0ce6a724-ca49-43f5-807f-25662b92b3c0,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-06-08 18:55:45,630 WARN [ResponseProcessor for block BP-1651463838-136.243.18.41-1686250532717:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1651463838-136.243.18.41-1686250532717:blk_1073741833_1009 java.io.IOException: Bad response ERROR for 
BP-1651463838-136.243.18.41-1686250532717:blk_1073741833_1009 from datanode DatanodeInfoWithStorage[127.0.0.1:33281,DS-0ce6a724-ca49-43f5-807f-25662b92b3c0,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-06-08 18:55:45,631 WARN [ResponseProcessor for block BP-1651463838-136.243.18.41-1686250532717:blk_1073741838_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1651463838-136.243.18.41-1686250532717:blk_1073741838_1014 java.io.IOException: Bad response ERROR for BP-1651463838-136.243.18.41-1686250532717:blk_1073741838_1014 from datanode DatanodeInfoWithStorage[127.0.0.1:33281,DS-0ce6a724-ca49-43f5-807f-25662b92b3c0,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-06-08 18:55:45,640 WARN [DataStreamer for file /user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,41765,1686250533689/jenkins-hbase17.apache.org%2C41765%2C1686250533689.1686250534198 block BP-1651463838-136.243.18.41-1686250532717:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-1651463838-136.243.18.41-1686250532717:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:33281,DS-0ce6a724-ca49-43f5-807f-25662b92b3c0,DISK], DatanodeInfoWithStorage[127.0.0.1:41341,DS-a01bb835-b1c2-485e-ba09-f1de42c50d2c,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:33281,DS-0ce6a724-ca49-43f5-807f-25662b92b3c0,DISK]) is bad. 
2023-06-08 18:55:45,640 WARN  [PacketResponder: BP-1651463838-136.243.18.41-1686250532717:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:33281]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run():
java.io.IOException: Connection reset by peer
	at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
	at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
	at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
	at sun.nio.ch.IOUtil.write(IOUtil.java:65)
	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470)
	at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478)
	at java.lang.Thread.run(Thread.java:750)
2023-06-08 18:55:45,640 WARN  [DataStreamer for file /user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/MasterData/WALs/jenkins-hbase17.apache.org,35289,1686250533613/jenkins-hbase17.apache.org%2C35289%2C1686250533613.1686250533808 block BP-1651463838-136.243.18.41-1686250532717:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-1651463838-136.243.18.41-1686250532717:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:41341,DS-a01bb835-b1c2-485e-ba09-f1de42c50d2c,DISK], DatanodeInfoWithStorage[127.0.0.1:33281,DS-0ce6a724-ca49-43f5-807f-25662b92b3c0,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:33281,DS-0ce6a724-ca49-43f5-807f-25662b92b3c0,DISK]) is bad.
2023-06-08 18:55:45,641 WARN  [PacketResponder: BP-1651463838-136.243.18.41-1686250532717:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:33281]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run():
java.nio.channels.ClosedByInterruptException
	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477)
	at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478)
	at java.lang.Thread.run(Thread.java:750)
2023-06-08 18:55:45,640 WARN  [DataStreamer for file /user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,36375,1686250534947/jenkins-hbase17.apache.org%2C36375%2C1686250534947.1686250535156 block BP-1651463838-136.243.18.41-1686250532717:blk_1073741838_1014] hdfs.DataStreamer(1548): Error Recovery for BP-1651463838-136.243.18.41-1686250532717:blk_1073741838_1014 in pipeline [DatanodeInfoWithStorage[127.0.0.1:41341,DS-a01bb835-b1c2-485e-ba09-f1de42c50d2c,DISK], DatanodeInfoWithStorage[127.0.0.1:33281,DS-0ce6a724-ca49-43f5-807f-25662b92b3c0,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:33281,DS-0ce6a724-ca49-43f5-807f-25662b92b3c0,DISK]) is bad.
2023-06-08 18:55:45,640 WARN  [DataStreamer for file /user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,41765,1686250533689/jenkins-hbase17.apache.org%2C41765%2C1686250533689.meta.1686250534323.meta block BP-1651463838-136.243.18.41-1686250532717:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-1651463838-136.243.18.41-1686250532717:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:41341,DS-a01bb835-b1c2-485e-ba09-f1de42c50d2c,DISK], DatanodeInfoWithStorage[127.0.0.1:33281,DS-0ce6a724-ca49-43f5-807f-25662b92b3c0,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:33281,DS-0ce6a724-ca49-43f5-807f-25662b92b3c0,DISK]) is bad.
2023-06-08 18:55:45,645 WARN  [PacketResponder: BP-1651463838-136.243.18.41-1686250532717:blk_1073741838_1014, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:33281]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run():
java.nio.channels.ClosedByInterruptException
	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477)
	at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478)
	at java.lang.Thread.run(Thread.java:750)
2023-06-08 18:55:45,648 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_68184238_17 at /127.0.0.1:46854 [Receiving block BP-1651463838-136.243.18.41-1686250532717:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:41341:DataXceiver error processing WRITE_BLOCK operation  src: /127.0.0.1:46854 dst: /127.0.0.1:41341
java.io.IOException: Connection reset by peer
	at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
	at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
	at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
	at sun.nio.ch.IOUtil.read(IOUtil.java:197)
	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
	at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-08 18:55:45,648 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_2091756776_17 at /127.0.0.1:46876 [Receiving block BP-1651463838-136.243.18.41-1686250532717:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:41341:DataXceiver error processing WRITE_BLOCK operation  src: /127.0.0.1:46876 dst: /127.0.0.1:41341
java.io.IOException: Premature EOF from inputStream
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-08 18:55:45,648 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1107101550_17 at /127.0.0.1:46920 [Receiving block BP-1651463838-136.243.18.41-1686250532717:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:41341:DataXceiver error processing WRITE_BLOCK operation  src: /127.0.0.1:46920 dst: /127.0.0.1:41341
java.io.IOException: Connection reset by peer
	at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
	at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
	at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
	at sun.nio.ch.IOUtil.read(IOUtil.java:197)
	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
	at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-08 18:55:45,655 INFO  [Listener at localhost.localdomain/42281] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-08 18:55:45,660 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_2091756776_17 at /127.0.0.1:46866 [Receiving block BP-1651463838-136.243.18.41-1686250532717:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:41341:DataXceiver error processing WRITE_BLOCK operation  src: /127.0.0.1:46866 dst: /127.0.0.1:41341
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:41341 remote=/127.0.0.1:46866]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-08 18:55:45,661 WARN  [PacketResponder: BP-1651463838-136.243.18.41-1686250532717:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:41341]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run():
java.io.IOException: The stream is closed
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478)
	at java.lang.Thread.run(Thread.java:750)
2023-06-08 18:55:45,668 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_2091756776_17 at /127.0.0.1:48024 [Receiving block BP-1651463838-136.243.18.41-1686250532717:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:33281:DataXceiver error processing WRITE_BLOCK operation  src: /127.0.0.1:48024 dst: /127.0.0.1:33281
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-08 18:55:45,758 WARN  [BP-1651463838-136.243.18.41-1686250532717 heartbeating to localhost.localdomain/127.0.0.1:42703] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-06-08 18:55:45,758 WARN  [BP-1651463838-136.243.18.41-1686250532717 heartbeating to localhost.localdomain/127.0.0.1:42703] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1651463838-136.243.18.41-1686250532717 (Datanode Uuid 5148ade7-3339-43bb-b7ef-d9e85ffd8826) service to localhost.localdomain/127.0.0.1:42703
2023-06-08 18:55:45,760 WARN  [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/cluster_b99a2616-cab3-0f0b-e54f-fc7bbe7c57e5/dfs/data/data3/current/BP-1651463838-136.243.18.41-1686250532717] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-08 18:55:45,760 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_68184238_17 at /127.0.0.1:47992 [Receiving block BP-1651463838-136.243.18.41-1686250532717:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:33281:DataXceiver error processing WRITE_BLOCK operation  src: /127.0.0.1:47992 dst: /127.0.0.1:33281
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-08 18:55:45,760 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1107101550_17 at /127.0.0.1:48082 [Receiving block BP-1651463838-136.243.18.41-1686250532717:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:33281:DataXceiver error processing WRITE_BLOCK operation  src: /127.0.0.1:48082 dst: /127.0.0.1:33281
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-08 18:55:45,760 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_2091756776_17 at /127.0.0.1:48040 [Receiving block BP-1651463838-136.243.18.41-1686250532717:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:33281:DataXceiver error processing WRITE_BLOCK operation  src: /127.0.0.1:48040 dst: /127.0.0.1:33281
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-08 18:55:45,761 WARN  [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/cluster_b99a2616-cab3-0f0b-e54f-fc7bbe7c57e5/dfs/data/data4/current/BP-1651463838-136.243.18.41-1686250532717] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-08 18:55:45,764 WARN  [Listener at localhost.localdomain/42281] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-06-08 18:55:45,765 WARN  [ResponseProcessor for block BP-1651463838-136.243.18.41-1686250532717:blk_1073741833_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1651463838-136.243.18.41-1686250532717:blk_1073741833_1018
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-06-08 18:55:45,765 WARN  [ResponseProcessor for block BP-1651463838-136.243.18.41-1686250532717:blk_1073741829_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1651463838-136.243.18.41-1686250532717:blk_1073741829_1015
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-06-08 18:55:45,767 WARN  [ResponseProcessor for block BP-1651463838-136.243.18.41-1686250532717:blk_1073741832_1017] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1651463838-136.243.18.41-1686250532717:blk_1073741832_1017
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-06-08 18:55:45,768 WARN  [ResponseProcessor for block BP-1651463838-136.243.18.41-1686250532717:blk_1073741838_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1651463838-136.243.18.41-1686250532717:blk_1073741838_1016
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-06-08 18:55:45,776 INFO  [Listener at localhost.localdomain/42281] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-08 18:55:45,878 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_2091756776_17 at /127.0.0.1:52094 [Receiving block BP-1651463838-136.243.18.41-1686250532717:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:41341:DataXceiver error processing WRITE_BLOCK operation  src: /127.0.0.1:52094 dst: /127.0.0.1:41341
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-08 18:55:45,879 WARN  [BP-1651463838-136.243.18.41-1686250532717 heartbeating to localhost.localdomain/127.0.0.1:42703] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-06-08 18:55:45,879 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_68184238_17 at /127.0.0.1:52068 [Receiving block BP-1651463838-136.243.18.41-1686250532717:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:41341:DataXceiver error processing WRITE_BLOCK operation  src: /127.0.0.1:52068 dst: /127.0.0.1:41341
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-08 18:55:45,879 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_2091756776_17 at /127.0.0.1:52078 [Receiving block BP-1651463838-136.243.18.41-1686250532717:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:41341:DataXceiver error processing WRITE_BLOCK operation  src: /127.0.0.1:52078 dst: /127.0.0.1:41341
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-08 18:55:45,879 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1107101550_17 at /127.0.0.1:52104 [Receiving block BP-1651463838-136.243.18.41-1686250532717:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:41341:DataXceiver error processing WRITE_BLOCK operation  src: /127.0.0.1:52104 dst: /127.0.0.1:41341
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-08 18:55:45,880 WARN  [BP-1651463838-136.243.18.41-1686250532717 heartbeating to localhost.localdomain/127.0.0.1:42703] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1651463838-136.243.18.41-1686250532717 (Datanode Uuid 5bdd1e7a-a0c3-478c-8c82-7fb084cb0abc) service to localhost.localdomain/127.0.0.1:42703
2023-06-08 18:55:45,882 WARN  [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/cluster_b99a2616-cab3-0f0b-e54f-fc7bbe7c57e5/dfs/data/data1/current/BP-1651463838-136.243.18.41-1686250532717] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-08 18:55:45,882 WARN  [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/cluster_b99a2616-cab3-0f0b-e54f-fc7bbe7c57e5/dfs/data/data2/current/BP-1651463838-136.243.18.41-1686250532717] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-08 18:55:45,886 DEBUG [Listener at localhost.localdomain/42281] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-06-08 18:55:45,889 INFO  [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:60010, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-06-08 18:55:45,891 WARN  [RS:1;jenkins-hbase17:36375.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=4, requesting roll of WAL
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41341,DS-a01bb835-b1c2-485e-ba09-f1de42c50d2c,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-08 18:55:45,891 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase17.apache.org%2C36375%2C1686250534947:(num 1686250535156) roll requested
2023-06-08 18:55:45,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36375] ipc.MetricsHBaseServer(134): Unknown exception type
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979)
	at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168)
	at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125)
	at java.lang.Thread.run(Thread.java:750)
Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41341,DS-a01bb835-b1c2-485e-ba09-f1de42c50d2c,DISK]] are bad. Aborting...
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:55:45,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36375] ipc.CallRunner(144): callId: 9 service: ClientService methodName: Mutate size: 1.2 K connection: 136.243.18.41:60010 deadline: 1686250555889, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL 2023-06-08 18:55:45,917 WARN [regionserver/jenkins-hbase17:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL 2023-06-08 18:55:45,917 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,36375,1686250534947/jenkins-hbase17.apache.org%2C36375%2C1686250534947.1686250535156 with entries=1, filesize=467 B; new WAL /user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,36375,1686250534947/jenkins-hbase17.apache.org%2C36375%2C1686250534947.1686250545891 2023-06-08 18:55:45,918 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39905,DS-839b40f3-38a2-487b-bda4-0238473e7517,DISK], DatanodeInfoWithStorage[127.0.0.1:35211,DS-e076b889-57b6-448a-8020-9c29e2da26b5,DISK]] 2023-06-08 18:55:45,918 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(716): 
hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,36375,1686250534947/jenkins-hbase17.apache.org%2C36375%2C1686250534947.1686250535156 is not closed yet, will try archiving it next time 2023-06-08 18:55:45,918 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41341,DS-a01bb835-b1c2-485e-ba09-f1de42c50d2c,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:55:45,918 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,36375,1686250534947/jenkins-hbase17.apache.org%2C36375%2C1686250534947.1686250535156; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41341,DS-a01bb835-b1c2-485e-ba09-f1de42c50d2c,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:55:45,919 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,36375,1686250534947/jenkins-hbase17.apache.org%2C36375%2C1686250534947.1686250535156 to hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/oldWALs/jenkins-hbase17.apache.org%2C36375%2C1686250534947.1686250535156 2023-06-08 18:55:57,930 INFO [Listener at localhost.localdomain/42281] wal.TestLogRolling(375): log.getCurrentFileName(): hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,36375,1686250534947/jenkins-hbase17.apache.org%2C36375%2C1686250534947.1686250545891 2023-06-08 18:55:57,931 WARN [Listener at localhost.localdomain/42281] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 18:55:57,932 WARN [ResponseProcessor for block BP-1651463838-136.243.18.41-1686250532717:blk_1073741839_1019] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1651463838-136.243.18.41-1686250532717:blk_1073741839_1019 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-08 18:55:57,933 WARN [DataStreamer for file 
/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,36375,1686250534947/jenkins-hbase17.apache.org%2C36375%2C1686250534947.1686250545891 block BP-1651463838-136.243.18.41-1686250532717:blk_1073741839_1019] hdfs.DataStreamer(1548): Error Recovery for BP-1651463838-136.243.18.41-1686250532717:blk_1073741839_1019 in pipeline [DatanodeInfoWithStorage[127.0.0.1:39905,DS-839b40f3-38a2-487b-bda4-0238473e7517,DISK], DatanodeInfoWithStorage[127.0.0.1:35211,DS-e076b889-57b6-448a-8020-9c29e2da26b5,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:39905,DS-839b40f3-38a2-487b-bda4-0238473e7517,DISK]) is bad. 2023-06-08 18:55:57,940 INFO [Listener at localhost.localdomain/42281] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 18:55:57,944 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1107101550_17 at /127.0.0.1:41232 [Receiving block BP-1651463838-136.243.18.41-1686250532717:blk_1073741839_1019]] datanode.DataXceiver(323): 127.0.0.1:35211:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:41232 dst: /127.0.0.1:35211 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:35211 remote=/127.0.0.1:41232]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 18:55:57,944 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1107101550_17 at /127.0.0.1:32778 [Receiving block BP-1651463838-136.243.18.41-1686250532717:blk_1073741839_1019]] datanode.DataXceiver(323): 127.0.0.1:39905:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:32778 dst: /127.0.0.1:39905 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 18:55:57,947 WARN [BP-1651463838-136.243.18.41-1686250532717 heartbeating to localhost.localdomain/127.0.0.1:42703] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-08 18:55:57,947 WARN [BP-1651463838-136.243.18.41-1686250532717 heartbeating to localhost.localdomain/127.0.0.1:42703] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1651463838-136.243.18.41-1686250532717 (Datanode Uuid 
5b1f5b68-046d-4f20-9ba9-212737299826) service to localhost.localdomain/127.0.0.1:42703 2023-06-08 18:55:57,947 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/cluster_b99a2616-cab3-0f0b-e54f-fc7bbe7c57e5/dfs/data/data7/current/BP-1651463838-136.243.18.41-1686250532717] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 18:55:57,948 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/cluster_b99a2616-cab3-0f0b-e54f-fc7bbe7c57e5/dfs/data/data8/current/BP-1651463838-136.243.18.41-1686250532717] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 18:55:57,956 WARN [sync.3] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:35211,DS-e076b889-57b6-448a-8020-9c29e2da26b5,DISK]] 2023-06-08 18:55:57,956 WARN [sync.3] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:35211,DS-e076b889-57b6-448a-8020-9c29e2da26b5,DISK]] 2023-06-08 18:55:57,956 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase17.apache.org%2C36375%2C1686250534947:(num 1686250545891) roll requested 2023-06-08 18:55:57,962 WARN [Thread-638] hdfs.DataStreamer(1658): Abandoning BP-1651463838-136.243.18.41-1686250532717:blk_1073741840_1021 2023-06-08 18:55:57,964 WARN [Thread-638] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41341,DS-a01bb835-b1c2-485e-ba09-f1de42c50d2c,DISK] 2023-06-08 18:55:57,971 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1107101550_17 at /127.0.0.1:39206 [Receiving block BP-1651463838-136.243.18.41-1686250532717:blk_1073741841_1022]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/cluster_b99a2616-cab3-0f0b-e54f-fc7bbe7c57e5/dfs/data/data5/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/cluster_b99a2616-cab3-0f0b-e54f-fc7bbe7c57e5/dfs/data/data6/current]'}, localName='127.0.0.1:35211', datanodeUuid='53d1e374-b48c-47ca-82e4-ce8d5724016c', xmitsInProgress=0}:Exception transfering block BP-1651463838-136.243.18.41-1686250532717:blk_1073741841_1022 to mirror 127.0.0.1:33281: java.net.ConnectException: Connection refused 2023-06-08 18:55:57,971 WARN [Thread-638] hdfs.DataStreamer(1658): Abandoning BP-1651463838-136.243.18.41-1686250532717:blk_1073741841_1022 2023-06-08 18:55:57,972 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1107101550_17 at 
/127.0.0.1:39206 [Receiving block BP-1651463838-136.243.18.41-1686250532717:blk_1073741841_1022]] datanode.DataXceiver(323): 127.0.0.1:35211:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:39206 dst: /127.0.0.1:35211 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 18:55:57,972 WARN [Thread-638] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:33281,DS-0ce6a724-ca49-43f5-807f-25662b92b3c0,DISK] 2023-06-08 18:55:57,983 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,36375,1686250534947/jenkins-hbase17.apache.org%2C36375%2C1686250534947.1686250545891 with entries=2, filesize=2.36 KB; new WAL /user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,36375,1686250534947/jenkins-hbase17.apache.org%2C36375%2C1686250534947.1686250557956 2023-06-08 18:55:57,983 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35211,DS-e076b889-57b6-448a-8020-9c29e2da26b5,DISK], DatanodeInfoWithStorage[127.0.0.1:33635,DS-88b38ff7-810c-4caf-8928-d261dba89922,DISK]] 
2023-06-08 18:55:57,983 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,36375,1686250534947/jenkins-hbase17.apache.org%2C36375%2C1686250534947.1686250545891 is not closed yet, will try archiving it next time 2023-06-08 18:56:00,259 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@78530a77] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:35211, datanodeUuid=53d1e374-b48c-47ca-82e4-ce8d5724016c, infoPort=36177, infoSecurePort=0, ipcPort=33925, storageInfo=lv=-57;cid=testClusterID;nsid=1786222594;c=1686250532717):Failed to transfer BP-1651463838-136.243.18.41-1686250532717:blk_1073741839_1020 to 127.0.0.1:41341 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-08 18:56:01,962 WARN [Listener at localhost.localdomain/42281] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 18:56:01,963 WARN [ResponseProcessor for block BP-1651463838-136.243.18.41-1686250532717:blk_1073741842_1023] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1651463838-136.243.18.41-1686250532717:blk_1073741842_1023 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at 
org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-08 18:56:01,964 WARN [DataStreamer for file /user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,36375,1686250534947/jenkins-hbase17.apache.org%2C36375%2C1686250534947.1686250557956 block BP-1651463838-136.243.18.41-1686250532717:blk_1073741842_1023] hdfs.DataStreamer(1548): Error Recovery for BP-1651463838-136.243.18.41-1686250532717:blk_1073741842_1023 in pipeline [DatanodeInfoWithStorage[127.0.0.1:35211,DS-e076b889-57b6-448a-8020-9c29e2da26b5,DISK], DatanodeInfoWithStorage[127.0.0.1:33635,DS-88b38ff7-810c-4caf-8928-d261dba89922,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:35211,DS-e076b889-57b6-448a-8020-9c29e2da26b5,DISK]) is bad. 2023-06-08 18:56:01,968 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1107101550_17 at /127.0.0.1:38908 [Receiving block BP-1651463838-136.243.18.41-1686250532717:blk_1073741842_1023]] datanode.DataXceiver(323): 127.0.0.1:33635:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:38908 dst: /127.0.0.1:33635 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:33635 remote=/127.0.0.1:38908]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 18:56:01,968 INFO [Listener at localhost.localdomain/42281] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 18:56:01,968 WARN [PacketResponder: BP-1651463838-136.243.18.41-1686250532717:blk_1073741842_1023, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:33635]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at 
org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-08 18:56:01,971 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1107101550_17 at /127.0.0.1:39222 [Receiving block BP-1651463838-136.243.18.41-1686250532717:blk_1073741842_1023]] datanode.DataXceiver(323): 127.0.0.1:35211:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:39222 dst: /127.0.0.1:35211 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 18:56:02,077 WARN [BP-1651463838-136.243.18.41-1686250532717 heartbeating to localhost.localdomain/127.0.0.1:42703] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-08 18:56:02,077 WARN [BP-1651463838-136.243.18.41-1686250532717 heartbeating to localhost.localdomain/127.0.0.1:42703] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1651463838-136.243.18.41-1686250532717 (Datanode Uuid 
53d1e374-b48c-47ca-82e4-ce8d5724016c) service to localhost.localdomain/127.0.0.1:42703 2023-06-08 18:56:02,078 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/cluster_b99a2616-cab3-0f0b-e54f-fc7bbe7c57e5/dfs/data/data5/current/BP-1651463838-136.243.18.41-1686250532717] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 18:56:02,078 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/cluster_b99a2616-cab3-0f0b-e54f-fc7bbe7c57e5/dfs/data/data6/current/BP-1651463838-136.243.18.41-1686250532717] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 18:56:02,088 WARN [sync.1] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:33635,DS-88b38ff7-810c-4caf-8928-d261dba89922,DISK]] 2023-06-08 18:56:02,088 WARN [sync.1] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:33635,DS-88b38ff7-810c-4caf-8928-d261dba89922,DISK]] 2023-06-08 18:56:02,088 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase17.apache.org%2C36375%2C1686250534947:(num 1686250557956) roll requested 2023-06-08 18:56:02,098 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1107101550_17 at /127.0.0.1:38928 [Receiving block BP-1651463838-136.243.18.41-1686250532717:blk_1073741843_1025]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/cluster_b99a2616-cab3-0f0b-e54f-fc7bbe7c57e5/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/cluster_b99a2616-cab3-0f0b-e54f-fc7bbe7c57e5/dfs/data/data10/current]'}, localName='127.0.0.1:33635', datanodeUuid='cfed3381-ec20-46c4-8842-44667a8dc53b', xmitsInProgress=0}:Exception transfering block BP-1651463838-136.243.18.41-1686250532717:blk_1073741843_1025 to mirror 127.0.0.1:33281: java.net.ConnectException: Connection refused 2023-06-08 18:56:02,098 WARN [Thread-650] hdfs.DataStreamer(1658): Abandoning BP-1651463838-136.243.18.41-1686250532717:blk_1073741843_1025 2023-06-08 18:56:02,098 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1107101550_17 at /127.0.0.1:38928 [Receiving block BP-1651463838-136.243.18.41-1686250532717:blk_1073741843_1025]] datanode.DataXceiver(323): 127.0.0.1:33635:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:38928 dst: /127.0.0.1:33635 java.net.ConnectException: Connection refused at 
sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 18:56:02,099 WARN [Thread-650] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:33281,DS-0ce6a724-ca49-43f5-807f-25662b92b3c0,DISK] 2023-06-08 18:56:02,100 WARN [Thread-650] hdfs.DataStreamer(1658): Abandoning BP-1651463838-136.243.18.41-1686250532717:blk_1073741844_1026 2023-06-08 18:56:02,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36375] regionserver.HRegion(9158): Flush requested on 475489ca47d830e9e063ab452deba395 2023-06-08 18:56:02,102 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 475489ca47d830e9e063ab452deba395 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-08 18:56:02,109 WARN [Thread-650] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:35211,DS-e076b889-57b6-448a-8020-9c29e2da26b5,DISK] 2023-06-08 18:56:02,111 WARN [Thread-650] hdfs.DataStreamer(1658): Abandoning BP-1651463838-136.243.18.41-1686250532717:blk_1073741845_1027 2023-06-08 18:56:02,116 WARN [Thread-650] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41341,DS-a01bb835-b1c2-485e-ba09-f1de42c50d2c,DISK] 2023-06-08 18:56:02,133 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1107101550_17 at /127.0.0.1:38942 [Receiving block 
BP-1651463838-136.243.18.41-1686250532717:blk_1073741846_1028]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/cluster_b99a2616-cab3-0f0b-e54f-fc7bbe7c57e5/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/cluster_b99a2616-cab3-0f0b-e54f-fc7bbe7c57e5/dfs/data/data10/current]'}, localName='127.0.0.1:33635', datanodeUuid='cfed3381-ec20-46c4-8842-44667a8dc53b', xmitsInProgress=0}:Exception transfering block BP-1651463838-136.243.18.41-1686250532717:blk_1073741846_1028 to mirror 127.0.0.1:39905: java.net.ConnectException: Connection refused 2023-06-08 18:56:02,133 WARN [Thread-650] hdfs.DataStreamer(1658): Abandoning BP-1651463838-136.243.18.41-1686250532717:blk_1073741846_1028 2023-06-08 18:56:02,133 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1107101550_17 at /127.0.0.1:38942 [Receiving block BP-1651463838-136.243.18.41-1686250532717:blk_1073741846_1028]] datanode.DataXceiver(323): 127.0.0.1:33635:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:38942 dst: /127.0.0.1:33635 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at 
java.lang.Thread.run(Thread.java:750) 2023-06-08 18:56:02,134 WARN [Thread-650] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39905,DS-839b40f3-38a2-487b-bda4-0238473e7517,DISK] 2023-06-08 18:56:02,135 WARN [IPC Server handler 0 on default port 42703] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-06-08 18:56:02,136 WARN [IPC Server handler 0 on default port 42703] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-06-08 18:56:02,136 WARN [IPC Server handler 0 on default port 42703] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-06-08 18:56:02,141 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1107101550_17 at /127.0.0.1:38950 [Receiving block BP-1651463838-136.243.18.41-1686250532717:blk_1073741847_1029]] datanode.DataXceiver(847): 
DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/cluster_b99a2616-cab3-0f0b-e54f-fc7bbe7c57e5/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/cluster_b99a2616-cab3-0f0b-e54f-fc7bbe7c57e5/dfs/data/data10/current]'}, localName='127.0.0.1:33635', datanodeUuid='cfed3381-ec20-46c4-8842-44667a8dc53b', xmitsInProgress=0}:Exception transfering block BP-1651463838-136.243.18.41-1686250532717:blk_1073741847_1029 to mirror 127.0.0.1:39905: java.net.ConnectException: Connection refused 2023-06-08 18:56:02,141 WARN [Thread-652] hdfs.DataStreamer(1658): Abandoning BP-1651463838-136.243.18.41-1686250532717:blk_1073741847_1029 2023-06-08 18:56:02,141 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1107101550_17 at /127.0.0.1:38950 [Receiving block BP-1651463838-136.243.18.41-1686250532717:blk_1073741847_1029]] datanode.DataXceiver(323): 127.0.0.1:33635:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:38950 dst: /127.0.0.1:33635 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 18:56:02,142 WARN [Thread-652] 
hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39905,DS-839b40f3-38a2-487b-bda4-0238473e7517,DISK] 2023-06-08 18:56:02,143 WARN [Thread-652] hdfs.DataStreamer(1658): Abandoning BP-1651463838-136.243.18.41-1686250532717:blk_1073741849_1031 2023-06-08 18:56:02,144 WARN [Thread-652] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41341,DS-a01bb835-b1c2-485e-ba09-f1de42c50d2c,DISK] 2023-06-08 18:56:02,145 WARN [Thread-652] hdfs.DataStreamer(1658): Abandoning BP-1651463838-136.243.18.41-1686250532717:blk_1073741850_1032 2023-06-08 18:56:02,146 WARN [Thread-652] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:35211,DS-e076b889-57b6-448a-8020-9c29e2da26b5,DISK] 2023-06-08 18:56:02,148 WARN [Thread-652] hdfs.DataStreamer(1658): Abandoning BP-1651463838-136.243.18.41-1686250532717:blk_1073741851_1033 2023-06-08 18:56:02,151 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,36375,1686250534947/jenkins-hbase17.apache.org%2C36375%2C1686250534947.1686250557956 with entries=13, filesize=14.09 KB; new WAL /user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,36375,1686250534947/jenkins-hbase17.apache.org%2C36375%2C1686250534947.1686250562088 2023-06-08 18:56:02,151 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33635,DS-88b38ff7-810c-4caf-8928-d261dba89922,DISK]] 2023-06-08 18:56:02,151 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,36375,1686250534947/jenkins-hbase17.apache.org%2C36375%2C1686250534947.1686250557956 is not closed yet, will try archiving it next time 2023-06-08 
18:56:02,153 WARN [Thread-652] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:33281,DS-0ce6a724-ca49-43f5-807f-25662b92b3c0,DISK] 2023-06-08 18:56:02,154 WARN [IPC Server handler 2 on default port 42703] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-06-08 18:56:02,154 WARN [IPC Server handler 2 on default port 42703] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-06-08 18:56:02,154 WARN [IPC Server handler 2 on default port 42703] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-06-08 18:56:02,329 WARN [sync.4] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:33635,DS-88b38ff7-810c-4caf-8928-d261dba89922,DISK]] 2023-06-08 18:56:02,329 WARN [sync.4] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:33635,DS-88b38ff7-810c-4caf-8928-d261dba89922,DISK]] 2023-06-08 18:56:02,329 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase17.apache.org%2C36375%2C1686250534947:(num 1686250562088) roll requested 2023-06-08 18:56:02,332 WARN [Thread-661] hdfs.DataStreamer(1658): Abandoning BP-1651463838-136.243.18.41-1686250532717:blk_1073741853_1035 2023-06-08 18:56:02,333 WARN [Thread-661] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:33281,DS-0ce6a724-ca49-43f5-807f-25662b92b3c0,DISK] 2023-06-08 18:56:02,335 WARN [Thread-661] hdfs.DataStreamer(1658): Abandoning BP-1651463838-136.243.18.41-1686250532717:blk_1073741854_1036 2023-06-08 18:56:02,336 WARN [Thread-661] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39905,DS-839b40f3-38a2-487b-bda4-0238473e7517,DISK] 2023-06-08 18:56:02,337 WARN [Thread-661] hdfs.DataStreamer(1658): Abandoning BP-1651463838-136.243.18.41-1686250532717:blk_1073741855_1037 2023-06-08 18:56:02,338 WARN [Thread-661] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41341,DS-a01bb835-b1c2-485e-ba09-f1de42c50d2c,DISK] 2023-06-08 18:56:02,340 WARN [Thread-661] hdfs.DataStreamer(1658): Abandoning BP-1651463838-136.243.18.41-1686250532717:blk_1073741856_1038 2023-06-08 18:56:02,341 WARN [Thread-661] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:35211,DS-e076b889-57b6-448a-8020-9c29e2da26b5,DISK] 2023-06-08 18:56:02,343 WARN [IPC Server handler 4 on default port 42703] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], 
storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-06-08 18:56:02,343 WARN [IPC Server handler 4 on default port 42703] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-06-08 18:56:02,343 WARN [IPC Server handler 4 on default port 42703] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-06-08 18:56:02,360 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,36375,1686250534947/jenkins-hbase17.apache.org%2C36375%2C1686250534947.1686250562088 with entries=1, filesize=1.22 KB; new WAL /user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,36375,1686250534947/jenkins-hbase17.apache.org%2C36375%2C1686250534947.1686250562329 2023-06-08 18:56:02,362 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33635,DS-88b38ff7-810c-4caf-8928-d261dba89922,DISK]] 2023-06-08 18:56:02,363 DEBUG 
[regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,36375,1686250534947/jenkins-hbase17.apache.org%2C36375%2C1686250534947.1686250557956 is not closed yet, will try archiving it next time 2023-06-08 18:56:02,363 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,36375,1686250534947/jenkins-hbase17.apache.org%2C36375%2C1686250534947.1686250562088 is not closed yet, will try archiving it next time 2023-06-08 18:56:02,373 DEBUG [Close-WAL-Writer-1] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,36375,1686250534947/jenkins-hbase17.apache.org%2C36375%2C1686250534947.1686250557956 is not closed yet, will try archiving it next time 2023-06-08 18:56:02,533 WARN [sync.1] wal.FSHLog(757): Too many consecutive RollWriter requests, it's a sign of the total number of live datanodes is lower than the tolerable replicas. 
2023-06-08 18:56:02,562 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=12 (bloomFilter=true), to=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/default/TestLogRolling-testLogRollOnDatanodeDeath/475489ca47d830e9e063ab452deba395/.tmp/info/8971ac4d680643d6a526b98f26eae796 2023-06-08 18:56:02,591 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/default/TestLogRolling-testLogRollOnDatanodeDeath/475489ca47d830e9e063ab452deba395/.tmp/info/8971ac4d680643d6a526b98f26eae796 as hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/default/TestLogRolling-testLogRollOnDatanodeDeath/475489ca47d830e9e063ab452deba395/info/8971ac4d680643d6a526b98f26eae796 2023-06-08 18:56:02,599 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/default/TestLogRolling-testLogRollOnDatanodeDeath/475489ca47d830e9e063ab452deba395/info/8971ac4d680643d6a526b98f26eae796, entries=5, sequenceid=12, filesize=10.0 K 2023-06-08 18:56:02,600 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=9.45 KB/9681 for 475489ca47d830e9e063ab452deba395 in 498ms, sequenceid=12, compaction requested=false 2023-06-08 18:56:02,601 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 475489ca47d830e9e063ab452deba395: 2023-06-08 18:56:02,740 WARN [Listener at localhost.localdomain/42281] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 18:56:02,743 WARN [Listener at localhost.localdomain/42281] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 
2023-06-08 18:56:02,745 INFO [Listener at localhost.localdomain/42281] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 18:56:02,753 INFO [Listener at localhost.localdomain/42281] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/java.io.tmpdir/Jetty_localhost_42387_datanode____.tszeg5/webapp 2023-06-08 18:56:02,838 INFO [Listener at localhost.localdomain/42281] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42387 2023-06-08 18:56:02,849 WARN [Listener at localhost.localdomain/42749] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 18:56:02,935 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x43f1fddccb36cc82: Processing first storage report for DS-0ce6a724-ca49-43f5-807f-25662b92b3c0 from datanode 5148ade7-3339-43bb-b7ef-d9e85ffd8826 2023-06-08 18:56:02,936 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x43f1fddccb36cc82: from storage DS-0ce6a724-ca49-43f5-807f-25662b92b3c0 node DatanodeRegistration(127.0.0.1:38637, datanodeUuid=5148ade7-3339-43bb-b7ef-d9e85ffd8826, infoPort=45129, infoSecurePort=0, ipcPort=42749, storageInfo=lv=-57;cid=testClusterID;nsid=1786222594;c=1686250532717), blocks: 7, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-08 18:56:02,937 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x43f1fddccb36cc82: Processing first storage report for DS-abb882cd-7e27-4ee3-ad3a-a4f1a42fae46 from datanode 5148ade7-3339-43bb-b7ef-d9e85ffd8826 2023-06-08 18:56:02,937 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* 
processReport 0x43f1fddccb36cc82: from storage DS-abb882cd-7e27-4ee3-ad3a-a4f1a42fae46 node DatanodeRegistration(127.0.0.1:38637, datanodeUuid=5148ade7-3339-43bb-b7ef-d9e85ffd8826, infoPort=45129, infoSecurePort=0, ipcPort=42749, storageInfo=lv=-57;cid=testClusterID;nsid=1786222594;c=1686250532717), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 18:56:03,578 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@24ad5c44] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:33635, datanodeUuid=cfed3381-ec20-46c4-8842-44667a8dc53b, infoPort=46555, infoSecurePort=0, ipcPort=42281, storageInfo=lv=-57;cid=testClusterID;nsid=1786222594;c=1686250532717):Failed to transfer BP-1651463838-136.243.18.41-1686250532717:blk_1073741842_1024 to 127.0.0.1:35211 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-08 18:56:03,578 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@4c87d191] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:33635, datanodeUuid=cfed3381-ec20-46c4-8842-44667a8dc53b, infoPort=46555, infoSecurePort=0, ipcPort=42281, storageInfo=lv=-57;cid=testClusterID;nsid=1786222594;c=1686250532717):Failed to transfer BP-1651463838-136.243.18.41-1686250532717:blk_1073741852_1034 to 127.0.0.1:41341 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at 
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-08 18:56:03,930 WARN [master/jenkins-hbase17:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41341,DS-a01bb835-b1c2-485e-ba09-f1de42c50d2c,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:56:03,931 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase17.apache.org%2C35289%2C1686250533613:(num 1686250533808) roll requested 2023-06-08 18:56:03,935 WARN [Thread-701] hdfs.DataStreamer(1658): Abandoning BP-1651463838-136.243.18.41-1686250532717:blk_1073741858_1040 2023-06-08 18:56:03,936 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at 
com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41341,DS-a01bb835-b1c2-485e-ba09-f1de42c50d2c,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:56:03,937 WARN [Thread-701] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41341,DS-a01bb835-b1c2-485e-ba09-f1de42c50d2c,DISK] 2023-06-08 18:56:03,937 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423) at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at 
org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41341,DS-a01bb835-b1c2-485e-ba09-f1de42c50d2c,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:56:03,938 WARN [Thread-701] hdfs.DataStreamer(1658): Abandoning BP-1651463838-136.243.18.41-1686250532717:blk_1073741859_1041 2023-06-08 18:56:03,939 WARN [Thread-701] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39905,DS-839b40f3-38a2-487b-bda4-0238473e7517,DISK] 2023-06-08 18:56:03,941 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_68184238_17 at /127.0.0.1:44682 [Receiving block BP-1651463838-136.243.18.41-1686250532717:blk_1073741860_1042]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/cluster_b99a2616-cab3-0f0b-e54f-fc7bbe7c57e5/dfs/data/data3/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/cluster_b99a2616-cab3-0f0b-e54f-fc7bbe7c57e5/dfs/data/data4/current]'}, localName='127.0.0.1:38637', datanodeUuid='5148ade7-3339-43bb-b7ef-d9e85ffd8826', xmitsInProgress=0}:Exception transfering block 
BP-1651463838-136.243.18.41-1686250532717:blk_1073741860_1042 to mirror 127.0.0.1:35211: java.net.ConnectException: Connection refused 2023-06-08 18:56:03,941 WARN [Thread-701] hdfs.DataStreamer(1658): Abandoning BP-1651463838-136.243.18.41-1686250532717:blk_1073741860_1042 2023-06-08 18:56:03,941 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_68184238_17 at /127.0.0.1:44682 [Receiving block BP-1651463838-136.243.18.41-1686250532717:blk_1073741860_1042]] datanode.DataXceiver(323): 127.0.0.1:38637:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:44682 dst: /127.0.0.1:38637 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 18:56:03,942 WARN [Thread-701] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:35211,DS-e076b889-57b6-448a-8020-9c29e2da26b5,DISK] 2023-06-08 18:56:03,948 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL 2023-06-08 18:56:03,948 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL 
/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/MasterData/WALs/jenkins-hbase17.apache.org,35289,1686250533613/jenkins-hbase17.apache.org%2C35289%2C1686250533613.1686250533808 with entries=88, filesize=43.74 KB; new WAL /user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/MasterData/WALs/jenkins-hbase17.apache.org,35289,1686250533613/jenkins-hbase17.apache.org%2C35289%2C1686250533613.1686250563931 2023-06-08 18:56:03,948 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38637,DS-0ce6a724-ca49-43f5-807f-25662b92b3c0,DISK], DatanodeInfoWithStorage[127.0.0.1:33635,DS-88b38ff7-810c-4caf-8928-d261dba89922,DISK]] 2023-06-08 18:56:03,948 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/MasterData/WALs/jenkins-hbase17.apache.org,35289,1686250533613/jenkins-hbase17.apache.org%2C35289%2C1686250533613.1686250533808 is not closed yet, will try archiving it next time 2023-06-08 18:56:03,948 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41341,DS-a01bb835-b1c2-485e-ba09-f1de42c50d2c,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:56:03,949 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/MasterData/WALs/jenkins-hbase17.apache.org,35289,1686250533613/jenkins-hbase17.apache.org%2C35289%2C1686250533613.1686250533808; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41341,DS-a01bb835-b1c2-485e-ba09-f1de42c50d2c,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:56:04,578 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@34f8ae57] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:33635, datanodeUuid=cfed3381-ec20-46c4-8842-44667a8dc53b, infoPort=46555, infoSecurePort=0, ipcPort=42281, storageInfo=lv=-57;cid=testClusterID;nsid=1786222594;c=1686250532717):Failed to transfer BP-1651463838-136.243.18.41-1686250532717:blk_1073741848_1030 to 127.0.0.1:35211 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at 
org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-08 18:56:14,936 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@29a80d54] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:38637, datanodeUuid=5148ade7-3339-43bb-b7ef-d9e85ffd8826, infoPort=45129, infoSecurePort=0, ipcPort=42749, storageInfo=lv=-57;cid=testClusterID;nsid=1786222594;c=1686250532717):Failed to transfer BP-1651463838-136.243.18.41-1686250532717:blk_1073741835_1011 to 127.0.0.1:39905 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-08 18:56:15,937 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@515cc62b] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:38637, datanodeUuid=5148ade7-3339-43bb-b7ef-d9e85ffd8826, infoPort=45129, infoSecurePort=0, ipcPort=42749, storageInfo=lv=-57;cid=testClusterID;nsid=1786222594;c=1686250532717):Failed to transfer BP-1651463838-136.243.18.41-1686250532717:blk_1073741831_1007 to 127.0.0.1:39905 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at 
org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-08 18:56:17,938 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@14e2074c] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:38637, datanodeUuid=5148ade7-3339-43bb-b7ef-d9e85ffd8826, infoPort=45129, infoSecurePort=0, ipcPort=42749, storageInfo=lv=-57;cid=testClusterID;nsid=1786222594;c=1686250532717):Failed to transfer BP-1651463838-136.243.18.41-1686250532717:blk_1073741828_1004 to 127.0.0.1:35211 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-08 18:56:17,938 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@7d496a29] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:38637, datanodeUuid=5148ade7-3339-43bb-b7ef-d9e85ffd8826, infoPort=45129, infoSecurePort=0, ipcPort=42749, storageInfo=lv=-57;cid=testClusterID;nsid=1786222594;c=1686250532717):Failed to transfer BP-1651463838-136.243.18.41-1686250532717:blk_1073741826_1002 to 127.0.0.1:35211 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at 
org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-08 18:56:20,938 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@4a139635] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:38637, datanodeUuid=5148ade7-3339-43bb-b7ef-d9e85ffd8826, infoPort=45129, infoSecurePort=0, ipcPort=42749, storageInfo=lv=-57;cid=testClusterID;nsid=1786222594;c=1686250532717):Failed to transfer BP-1651463838-136.243.18.41-1686250532717:blk_1073741825_1001 to 127.0.0.1:35211 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-08 18:56:20,938 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@7342eef5] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:38637, datanodeUuid=5148ade7-3339-43bb-b7ef-d9e85ffd8826, infoPort=45129, infoSecurePort=0, ipcPort=42749, storageInfo=lv=-57;cid=testClusterID;nsid=1786222594;c=1686250532717):Failed to transfer BP-1651463838-136.243.18.41-1686250532717:blk_1073741836_1012 to 127.0.0.1:35211 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at 
org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-08 18:56:21,343 INFO [Listener at localhost.localdomain/42749] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,36375,1686250534947/jenkins-hbase17.apache.org%2C36375%2C1686250534947.1686250562329 with entries=2, filesize=1.57 KB; new WAL /user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,36375,1686250534947/jenkins-hbase17.apache.org%2C36375%2C1686250534947.1686250581331 2023-06-08 18:56:21,343 DEBUG [Listener at localhost.localdomain/42749] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33635,DS-88b38ff7-810c-4caf-8928-d261dba89922,DISK], DatanodeInfoWithStorage[127.0.0.1:38637,DS-0ce6a724-ca49-43f5-807f-25662b92b3c0,DISK]] 2023-06-08 18:56:21,344 DEBUG [Listener at localhost.localdomain/42749] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,36375,1686250534947/jenkins-hbase17.apache.org%2C36375%2C1686250534947.1686250562329 is not closed yet, will try archiving it next time 2023-06-08 18:56:21,344 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,36375,1686250534947/jenkins-hbase17.apache.org%2C36375%2C1686250534947.1686250545891 to hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/oldWALs/jenkins-hbase17.apache.org%2C36375%2C1686250534947.1686250545891 2023-06-08 18:56:21,352 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36375] regionserver.HRegion(9158): Flush requested on 475489ca47d830e9e063ab452deba395 2023-06-08 18:56:21,352 INFO [MemStoreFlusher.0] 
regionserver.HRegion(2745): Flushing 475489ca47d830e9e063ab452deba395 1/1 column families, dataSize=10.50 KB heapSize=11.50 KB 2023-06-08 18:56:21,359 INFO [sync.0] wal.FSHLog(774): LowReplication-Roller was enabled. 2023-06-08 18:56:21,376 INFO [Listener at localhost.localdomain/42749] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-08 18:56:21,376 INFO [Listener at localhost.localdomain/42749] client.ConnectionImplementation(1980): Closing master protocol: MasterService 2023-06-08 18:56:21,376 DEBUG [Listener at localhost.localdomain/42749] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6fb83ba8 to 127.0.0.1:63926 2023-06-08 18:56:21,376 DEBUG [Listener at localhost.localdomain/42749] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 18:56:21,376 DEBUG [Listener at localhost.localdomain/42749] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-06-08 18:56:21,376 DEBUG [Listener at localhost.localdomain/42749] util.JVMClusterUtil(257): Found active master hash=1843753384, stopped=false 2023-06-08 18:56:21,377 INFO [Listener at localhost.localdomain/42749] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase17.apache.org,35289,1686250533613 2023-06-08 18:56:21,378 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-08 18:56:21,378 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:56:21,379 INFO [Listener at localhost.localdomain/42749] procedure2.ProcedureExecutor(629): Stopping 2023-06-08 18:56:21,379 DEBUG [Listener at localhost.localdomain/42749] zookeeper.ReadOnlyZKClient(361): Close 
zookeeper connection 0x2b0f1d0c to 127.0.0.1:63926 2023-06-08 18:56:21,379 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): regionserver:41765-0x100abcaadde0001, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-08 18:56:21,379 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): regionserver:36375-0x100abcaadde0005, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-08 18:56:21,379 DEBUG [Listener at localhost.localdomain/42749] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 18:56:21,379 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.50 KB at sequenceid=25 (bloomFilter=true), to=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/default/TestLogRolling-testLogRollOnDatanodeDeath/475489ca47d830e9e063ab452deba395/.tmp/info/9ad7c4fa1d054df9985a7f1fd8977e40 2023-06-08 18:56:21,380 INFO [Listener at localhost.localdomain/42749] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase17.apache.org,41765,1686250533689' ***** 2023-06-08 18:56:21,380 INFO [Listener at localhost.localdomain/42749] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-08 18:56:21,380 INFO [Listener at localhost.localdomain/42749] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase17.apache.org,36375,1686250534947' ***** 2023-06-08 18:56:21,380 INFO [Listener at localhost.localdomain/42749] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-08 18:56:21,380 INFO [RS:0;jenkins-hbase17:41765] regionserver.HeapMemoryManager(220): Stopping 2023-06-08 18:56:21,380 INFO [RS:1;jenkins-hbase17:36375] regionserver.HeapMemoryManager(220): Stopping 2023-06-08 18:56:21,380 INFO 
[RS:0;jenkins-hbase17:41765] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-08 18:56:21,380 INFO [RS:0;jenkins-hbase17:41765] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-06-08 18:56:21,380 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-08 18:56:21,380 INFO [RS:0;jenkins-hbase17:41765] regionserver.HRegionServer(3303): Received CLOSE for 62d543670fdace48ca11e453928cc34f 2023-06-08 18:56:21,381 INFO [RS:0;jenkins-hbase17:41765] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,41765,1686250533689 2023-06-08 18:56:21,381 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 18:56:21,381 DEBUG [RS:0;jenkins-hbase17:41765] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x67b1ab40 to 127.0.0.1:63926 2023-06-08 18:56:21,381 DEBUG [RS:0;jenkins-hbase17:41765] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 18:56:21,382 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41765-0x100abcaadde0001, quorum=127.0.0.1:63926, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 18:56:21,382 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36375-0x100abcaadde0005, quorum=127.0.0.1:63926, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 18:56:21,382 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 62d543670fdace48ca11e453928cc34f, disabling compactions & flushes 2023-06-08 18:56:21,383 INFO [RS:0;jenkins-hbase17:41765] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
2023-06-08 18:56:21,383 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686250534387.62d543670fdace48ca11e453928cc34f. 2023-06-08 18:56:21,383 INFO [RS:0;jenkins-hbase17:41765] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-08 18:56:21,383 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686250534387.62d543670fdace48ca11e453928cc34f. 2023-06-08 18:56:21,383 INFO [RS:0;jenkins-hbase17:41765] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-06-08 18:56:21,383 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686250534387.62d543670fdace48ca11e453928cc34f. after waiting 0 ms 2023-06-08 18:56:21,384 INFO [RS:0;jenkins-hbase17:41765] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-08 18:56:21,384 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686250534387.62d543670fdace48ca11e453928cc34f. 
2023-06-08 18:56:21,384 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 62d543670fdace48ca11e453928cc34f 1/1 column families, dataSize=78 B heapSize=488 B 2023-06-08 18:56:21,384 INFO [RS:0;jenkins-hbase17:41765] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-06-08 18:56:21,384 DEBUG [RS:0;jenkins-hbase17:41765] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 62d543670fdace48ca11e453928cc34f=hbase:namespace,,1686250534387.62d543670fdace48ca11e453928cc34f.} 2023-06-08 18:56:21,385 DEBUG [RS:0;jenkins-hbase17:41765] regionserver.HRegionServer(1504): Waiting on 1588230740, 62d543670fdace48ca11e453928cc34f 2023-06-08 18:56:21,385 WARN [RS:0;jenkins-hbase17:41765.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=7, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41341,DS-a01bb835-b1c2-485e-ba09-f1de42c50d2c,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:56:21,385 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-08 18:56:21,386 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-08 18:56:21,386 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-08 18:56:21,386 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase17.apache.org%2C41765%2C1686250533689:(num 1686250534198) roll requested 2023-06-08 18:56:21,386 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-08 18:56:21,386 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-08 18:56:21,386 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 62d543670fdace48ca11e453928cc34f: 2023-06-08 18:56:21,386 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.93 KB heapSize=5.45 KB 2023-06-08 18:56:21,386 WARN [RS_OPEN_META-regionserver/jenkins-hbase17:0-0.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=15, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41341,DS-a01bb835-b1c2-485e-ba09-f1de42c50d2c,DISK]] are bad. 
Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:56:21,387 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase17.apache.org,41765,1686250533689: Unrecoverable exception while closing hbase:namespace,,1686250534387.62d543670fdace48ca11e453928cc34f. ***** org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41341,DS-a01bb835-b1c2-485e-ba09-f1de42c50d2c,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:56:21,388 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-08 18:56:21,388 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2023-06-08 18:56:21,388 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-06-08 18:56:21,394 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] util.JSONBean(130): Listing beans for java.lang:type=Memory 2023-06-08 18:56:21,397 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC 2023-06-08 18:56:21,397 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication 2023-06-08 18:56:21,397 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server 2023-06-08 18:56:21,398 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 1026555904, "init": 524288000, "max": 2051014656, "used": 326178920 }, "NonHeapMemoryUsage": { 
"committed": 134021120, "init": 2555904, "max": -1, "used": 131612200 }, "Verbose": false, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] } 2023-06-08 18:56:21,398 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_2091756776_17 at /127.0.0.1:39480 [Receiving block BP-1651463838-136.243.18.41-1686250532717:blk_1073741864_1046]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/cluster_b99a2616-cab3-0f0b-e54f-fc7bbe7c57e5/dfs/data/data3/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/cluster_b99a2616-cab3-0f0b-e54f-fc7bbe7c57e5/dfs/data/data4/current]'}, localName='127.0.0.1:38637', datanodeUuid='5148ade7-3339-43bb-b7ef-d9e85ffd8826', xmitsInProgress=0}:Exception transfering block BP-1651463838-136.243.18.41-1686250532717:blk_1073741864_1046 to mirror 127.0.0.1:35211: java.net.ConnectException: Connection refused 2023-06-08 18:56:21,398 WARN [Thread-736] hdfs.DataStreamer(1658): Abandoning BP-1651463838-136.243.18.41-1686250532717:blk_1073741864_1046 2023-06-08 18:56:21,398 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_2091756776_17 at /127.0.0.1:39480 [Receiving block BP-1651463838-136.243.18.41-1686250532717:blk_1073741864_1046]] datanode.DataXceiver(323): 127.0.0.1:38637:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:39480 dst: /127.0.0.1:38637 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 18:56:21,399 WARN [Thread-736] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:35211,DS-e076b889-57b6-448a-8020-9c29e2da26b5,DISK] 2023-06-08 18:56:21,399 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/default/TestLogRolling-testLogRollOnDatanodeDeath/475489ca47d830e9e063ab452deba395/.tmp/info/9ad7c4fa1d054df9985a7f1fd8977e40 as hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/default/TestLogRolling-testLogRollOnDatanodeDeath/475489ca47d830e9e063ab452deba395/info/9ad7c4fa1d054df9985a7f1fd8977e40 2023-06-08 18:56:21,405 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35289] master.MasterRpcServices(609): jenkins-hbase17.apache.org,41765,1686250533689 reported a fatal error: ***** ABORTING region server jenkins-hbase17.apache.org,41765,1686250533689: Unrecoverable exception while closing hbase:namespace,,1686250534387.62d543670fdace48ca11e453928cc34f. 
***** Cause: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41341,DS-a01bb835-b1c2-485e-ba09-f1de42c50d2c,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:56:21,412 WARN [regionserver/jenkins-hbase17:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL 2023-06-08 18:56:21,412 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,41765,1686250533689/jenkins-hbase17.apache.org%2C41765%2C1686250533689.1686250534198 with entries=3, filesize=601 B; new WAL /user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,41765,1686250533689/jenkins-hbase17.apache.org%2C41765%2C1686250533689.1686250581386 2023-06-08 18:56:21,413 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added 
hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/default/TestLogRolling-testLogRollOnDatanodeDeath/475489ca47d830e9e063ab452deba395/info/9ad7c4fa1d054df9985a7f1fd8977e40, entries=8, sequenceid=25, filesize=13.2 K 2023-06-08 18:56:21,413 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33635,DS-88b38ff7-810c-4caf-8928-d261dba89922,DISK], DatanodeInfoWithStorage[127.0.0.1:38637,DS-0ce6a724-ca49-43f5-807f-25662b92b3c0,DISK]] 2023-06-08 18:56:21,413 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,41765,1686250533689/jenkins-hbase17.apache.org%2C41765%2C1686250533689.1686250534198 is not closed yet, will try archiving it next time 2023-06-08 18:56:21,413 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41341,DS-a01bb835-b1c2-485e-ba09-f1de42c50d2c,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:56:21,413 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase17.apache.org%2C41765%2C1686250533689.meta:.meta(num 1686250534323) roll requested 2023-06-08 18:56:21,413 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,41765,1686250533689/jenkins-hbase17.apache.org%2C41765%2C1686250533689.1686250534198; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41341,DS-a01bb835-b1c2-485e-ba09-f1de42c50d2c,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:56:21,414 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~10.50 KB/10757, heapSize ~11.48 KB/11760, currentSize=9.46 KB/9684 for 475489ca47d830e9e063ab452deba395 in 62ms, sequenceid=25, compaction requested=false 2023-06-08 18:56:21,415 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 475489ca47d830e9e063ab452deba395: 2023-06-08 18:56:21,415 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=23.2 K, sizeToCheck=16.0 K 2023-06-08 18:56:21,415 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-08 18:56:21,415 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/default/TestLogRolling-testLogRollOnDatanodeDeath/475489ca47d830e9e063ab452deba395/info/9ad7c4fa1d054df9985a7f1fd8977e40 because midkey is the same as first or last row 2023-06-08 18:56:21,415 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-08 18:56:21,415 INFO [RS:1;jenkins-hbase17:36375] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-08 18:56:21,415 INFO [RS:1;jenkins-hbase17:36375] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-06-08 18:56:21,415 INFO [RS:1;jenkins-hbase17:36375] regionserver.HRegionServer(3303): Received CLOSE for 475489ca47d830e9e063ab452deba395 2023-06-08 18:56:21,415 INFO [RS:1;jenkins-hbase17:36375] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,36375,1686250534947 2023-06-08 18:56:21,416 DEBUG [RS:1;jenkins-hbase17:36375] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x40c24db7 to 127.0.0.1:63926 2023-06-08 18:56:21,416 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 475489ca47d830e9e063ab452deba395, disabling compactions & flushes 2023-06-08 18:56:21,416 DEBUG [RS:1;jenkins-hbase17:36375] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 18:56:21,416 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1686250535055.475489ca47d830e9e063ab452deba395. 2023-06-08 18:56:21,416 INFO [RS:1;jenkins-hbase17:36375] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-06-08 18:56:21,416 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1686250535055.475489ca47d830e9e063ab452deba395. 2023-06-08 18:56:21,416 DEBUG [RS:1;jenkins-hbase17:36375] regionserver.HRegionServer(1478): Online Regions={475489ca47d830e9e063ab452deba395=TestLogRolling-testLogRollOnDatanodeDeath,,1686250535055.475489ca47d830e9e063ab452deba395.} 2023-06-08 18:56:21,416 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1686250535055.475489ca47d830e9e063ab452deba395. 
after waiting 0 ms 2023-06-08 18:56:21,416 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1686250535055.475489ca47d830e9e063ab452deba395. 2023-06-08 18:56:21,416 DEBUG [RS:1;jenkins-hbase17:36375] regionserver.HRegionServer(1504): Waiting on 475489ca47d830e9e063ab452deba395 2023-06-08 18:56:21,416 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 475489ca47d830e9e063ab452deba395 1/1 column families, dataSize=9.46 KB heapSize=10.38 KB 2023-06-08 18:56:21,417 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_2091756776_17 at /127.0.0.1:39504 [Receiving block BP-1651463838-136.243.18.41-1686250532717:blk_1073741866_1048]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/cluster_b99a2616-cab3-0f0b-e54f-fc7bbe7c57e5/dfs/data/data3/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/cluster_b99a2616-cab3-0f0b-e54f-fc7bbe7c57e5/dfs/data/data4/current]'}, localName='127.0.0.1:38637', datanodeUuid='5148ade7-3339-43bb-b7ef-d9e85ffd8826', xmitsInProgress=0}:Exception transfering block BP-1651463838-136.243.18.41-1686250532717:blk_1073741866_1048 to mirror 127.0.0.1:35211: java.net.ConnectException: Connection refused 2023-06-08 18:56:21,417 WARN [Thread-745] hdfs.DataStreamer(1658): Abandoning BP-1651463838-136.243.18.41-1686250532717:blk_1073741866_1048 2023-06-08 18:56:21,417 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_2091756776_17 at /127.0.0.1:39504 [Receiving block BP-1651463838-136.243.18.41-1686250532717:blk_1073741866_1048]] datanode.DataXceiver(323): 127.0.0.1:38637:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:39504 dst: /127.0.0.1:38637 
java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 18:56:21,418 WARN [Thread-745] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:35211,DS-e076b889-57b6-448a-8020-9c29e2da26b5,DISK] 2023-06-08 18:56:21,431 WARN [regionserver/jenkins-hbase17:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL 2023-06-08 18:56:21,432 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,41765,1686250533689/jenkins-hbase17.apache.org%2C41765%2C1686250533689.meta.1686250534323.meta with entries=11, filesize=3.69 KB; new WAL /user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,41765,1686250533689/jenkins-hbase17.apache.org%2C41765%2C1686250533689.meta.1686250581413.meta 2023-06-08 18:56:21,432 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33635,DS-88b38ff7-810c-4caf-8928-d261dba89922,DISK], 
DatanodeInfoWithStorage[127.0.0.1:38637,DS-0ce6a724-ca49-43f5-807f-25662b92b3c0,DISK]] 2023-06-08 18:56:21,432 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41341,DS-a01bb835-b1c2-485e-ba09-f1de42c50d2c,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:56:21,432 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,41765,1686250533689/jenkins-hbase17.apache.org%2C41765%2C1686250533689.meta.1686250534323.meta is not closed yet, will try archiving it next time 2023-06-08 18:56:21,432 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,41765,1686250533689/jenkins-hbase17.apache.org%2C41765%2C1686250533689.meta.1686250534323.meta; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41341,DS-a01bb835-b1c2-485e-ba09-f1de42c50d2c,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:56:21,437 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=9.46 KB at sequenceid=37 (bloomFilter=true), to=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/default/TestLogRolling-testLogRollOnDatanodeDeath/475489ca47d830e9e063ab452deba395/.tmp/info/f0a7d0545b90491a966cb368a87f61a7 2023-06-08 18:56:21,445 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/default/TestLogRolling-testLogRollOnDatanodeDeath/475489ca47d830e9e063ab452deba395/.tmp/info/f0a7d0545b90491a966cb368a87f61a7 as hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/default/TestLogRolling-testLogRollOnDatanodeDeath/475489ca47d830e9e063ab452deba395/info/f0a7d0545b90491a966cb368a87f61a7 2023-06-08 18:56:21,451 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/default/TestLogRolling-testLogRollOnDatanodeDeath/475489ca47d830e9e063ab452deba395/info/f0a7d0545b90491a966cb368a87f61a7, entries=9, sequenceid=37, filesize=14.2 K 2023-06-08 18:56:21,452 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~9.46 KB/9684, heapSize ~10.36 KB/10608, currentSize=0 B/0 for 475489ca47d830e9e063ab452deba395 in 36ms, sequenceid=37, compaction requested=true 2023-06-08 
18:56:21,458 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/data/default/TestLogRolling-testLogRollOnDatanodeDeath/475489ca47d830e9e063ab452deba395/recovered.edits/40.seqid, newMaxSeqId=40, maxSeqId=1 2023-06-08 18:56:21,459 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1686250535055.475489ca47d830e9e063ab452deba395. 2023-06-08 18:56:21,460 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 475489ca47d830e9e063ab452deba395: 2023-06-08 18:56:21,460 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1686250535055.475489ca47d830e9e063ab452deba395. 2023-06-08 18:56:21,585 INFO [RS:0;jenkins-hbase17:41765] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-08 18:56:21,585 INFO [RS:0;jenkins-hbase17:41765] regionserver.HRegionServer(3303): Received CLOSE for 62d543670fdace48ca11e453928cc34f 2023-06-08 18:56:21,585 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-08 18:56:21,585 DEBUG [RS:0;jenkins-hbase17:41765] regionserver.HRegionServer(1504): Waiting on 1588230740, 62d543670fdace48ca11e453928cc34f 2023-06-08 18:56:21,585 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-08 18:56:21,585 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 62d543670fdace48ca11e453928cc34f, disabling compactions & flushes 2023-06-08 18:56:21,585 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on 
hbase:meta,,1.1588230740 2023-06-08 18:56:21,585 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686250534387.62d543670fdace48ca11e453928cc34f. 2023-06-08 18:56:21,585 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-08 18:56:21,585 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686250534387.62d543670fdace48ca11e453928cc34f. 2023-06-08 18:56:21,585 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-08 18:56:21,585 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686250534387.62d543670fdace48ca11e453928cc34f. after waiting 0 ms 2023-06-08 18:56:21,586 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686250534387.62d543670fdace48ca11e453928cc34f. 2023-06-08 18:56:21,586 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-08 18:56:21,586 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 62d543670fdace48ca11e453928cc34f: 2023-06-08 18:56:21,586 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1686250534387.62d543670fdace48ca11e453928cc34f. 2023-06-08 18:56:21,586 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2539): Abort already in progress. 
Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-06-08 18:56:21,616 INFO [RS:1;jenkins-hbase17:36375] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,36375,1686250534947; all regions closed. 2023-06-08 18:56:21,617 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,36375,1686250534947 2023-06-08 18:56:21,629 DEBUG [RS:1;jenkins-hbase17:36375] wal.AbstractFSWAL(1028): Moved 4 WAL file(s) to /user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/oldWALs 2023-06-08 18:56:21,630 INFO [RS:1;jenkins-hbase17:36375] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase17.apache.org%2C36375%2C1686250534947:(num 1686250581331) 2023-06-08 18:56:21,630 DEBUG [RS:1;jenkins-hbase17:36375] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 18:56:21,630 INFO [RS:1;jenkins-hbase17:36375] regionserver.LeaseManager(133): Closed leases 2023-06-08 18:56:21,630 INFO [RS:1;jenkins-hbase17:36375] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-06-08 18:56:21,631 INFO [RS:1;jenkins-hbase17:36375] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-08 18:56:21,631 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-08 18:56:21,631 INFO [RS:1;jenkins-hbase17:36375] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-08 18:56:21,631 INFO [RS:1;jenkins-hbase17:36375] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-06-08 18:56:21,632 INFO [RS:1;jenkins-hbase17:36375] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:36375 2023-06-08 18:56:21,636 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): regionserver:41765-0x100abcaadde0001, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,36375,1686250534947 2023-06-08 18:56:21,636 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 18:56:21,636 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): regionserver:36375-0x100abcaadde0005, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,36375,1686250534947 2023-06-08 18:56:21,636 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): regionserver:41765-0x100abcaadde0001, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 18:56:21,636 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): regionserver:36375-0x100abcaadde0005, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 18:56:21,638 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,36375,1686250534947] 2023-06-08 18:56:21,638 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,36375,1686250534947; numProcessing=1 2023-06-08 18:56:21,639 DEBUG [RegionServerTracker-0] 
zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,36375,1686250534947 already deleted, retry=false 2023-06-08 18:56:21,640 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,36375,1686250534947 expired; onlineServers=1 2023-06-08 18:56:21,785 INFO [RS:0;jenkins-hbase17:41765] regionserver.HRegionServer(1499): We were exiting though online regions are not empty, because some regions failed closing 2023-06-08 18:56:21,785 INFO [RS:0;jenkins-hbase17:41765] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,41765,1686250533689; all regions closed. 2023-06-08 18:56:21,786 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,41765,1686250533689 2023-06-08 18:56:21,793 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/WALs/jenkins-hbase17.apache.org,41765,1686250533689 2023-06-08 18:56:21,799 DEBUG [RS:0;jenkins-hbase17:41765] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 18:56:21,799 INFO [RS:0;jenkins-hbase17:41765] regionserver.LeaseManager(133): Closed leases 2023-06-08 18:56:21,799 INFO [RS:0;jenkins-hbase17:41765] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-06-08 18:56:21,799 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-06-08 18:56:21,800 INFO [RS:0;jenkins-hbase17:41765] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:41765 2023-06-08 18:56:21,801 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 18:56:21,801 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): regionserver:41765-0x100abcaadde0001, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,41765,1686250533689 2023-06-08 18:56:21,802 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,41765,1686250533689] 2023-06-08 18:56:21,802 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,41765,1686250533689; numProcessing=2 2023-06-08 18:56:21,803 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,41765,1686250533689 already deleted, retry=false 2023-06-08 18:56:21,803 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,41765,1686250533689 expired; onlineServers=0 2023-06-08 18:56:21,803 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase17.apache.org,35289,1686250533613' ***** 2023-06-08 18:56:21,803 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-06-08 18:56:21,804 DEBUG [M:0;jenkins-hbase17:35289] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@69dae8cc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, 
maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-06-08 18:56:21,804 INFO [M:0;jenkins-hbase17:35289] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,35289,1686250533613 2023-06-08 18:56:21,804 INFO [M:0;jenkins-hbase17:35289] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,35289,1686250533613; all regions closed. 2023-06-08 18:56:21,804 DEBUG [M:0;jenkins-hbase17:35289] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 18:56:21,805 DEBUG [M:0;jenkins-hbase17:35289] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-06-08 18:56:21,805 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-06-08 18:56:21,805 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1686250533931] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1686250533931,5,FailOnTimeoutGroup] 2023-06-08 18:56:21,805 DEBUG [M:0;jenkins-hbase17:35289] cleaner.HFileCleaner(317): Stopping file delete threads 2023-06-08 18:56:21,805 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1686250533931] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1686250533931,5,FailOnTimeoutGroup] 2023-06-08 18:56:21,805 INFO [M:0;jenkins-hbase17:35289] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-06-08 18:56:21,806 INFO [M:0;jenkins-hbase17:35289] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-06-08 18:56:21,806 INFO [M:0;jenkins-hbase17:35289] hbase.ChoreService(369): Chore service for: master/jenkins-hbase17:0 had [] on shutdown 2023-06-08 18:56:21,806 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-06-08 18:56:21,806 DEBUG [M:0;jenkins-hbase17:35289] master.HMaster(1512): Stopping service threads 2023-06-08 18:56:21,806 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:56:21,806 INFO [M:0;jenkins-hbase17:35289] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-06-08 18:56:21,807 ERROR [M:0;jenkins-hbase17:35289] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-06-08 18:56:21,807 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-08 18:56:21,807 INFO [M:0;jenkins-hbase17:35289] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-06-08 18:56:21,807 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-06-08 18:56:21,808 DEBUG [M:0;jenkins-hbase17:35289] zookeeper.ZKUtil(398): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-06-08 18:56:21,808 WARN [M:0;jenkins-hbase17:35289] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-06-08 18:56:21,808 INFO [M:0;jenkins-hbase17:35289] assignment.AssignmentManager(315): Stopping assignment manager 2023-06-08 18:56:21,809 INFO [M:0;jenkins-hbase17:35289] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-06-08 18:56:21,809 DEBUG [M:0;jenkins-hbase17:35289] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-08 18:56:21,809 INFO [M:0;jenkins-hbase17:35289] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 18:56:21,809 DEBUG [M:0;jenkins-hbase17:35289] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 18:56:21,809 DEBUG [M:0;jenkins-hbase17:35289] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-08 18:56:21,809 DEBUG [M:0;jenkins-hbase17:35289] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-08 18:56:21,809 INFO [M:0;jenkins-hbase17:35289] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.11 KB heapSize=45.77 KB 2023-06-08 18:56:21,827 INFO [M:0;jenkins-hbase17:35289] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.11 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/3fe46922ec9445608c2122ac594acc67 2023-06-08 18:56:21,833 DEBUG [M:0;jenkins-hbase17:35289] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/3fe46922ec9445608c2122ac594acc67 as hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/3fe46922ec9445608c2122ac594acc67 2023-06-08 18:56:21,839 INFO [M:0;jenkins-hbase17:35289] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42703/user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/3fe46922ec9445608c2122ac594acc67, entries=11, sequenceid=92, filesize=7.0 K 2023-06-08 18:56:21,840 INFO [M:0;jenkins-hbase17:35289] regionserver.HRegion(2948): Finished flush of dataSize ~38.11 KB/39023, heapSize ~45.75 KB/46848, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 31ms, sequenceid=92, compaction requested=false 2023-06-08 18:56:21,842 INFO [M:0;jenkins-hbase17:35289] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-08 18:56:21,842 DEBUG [M:0;jenkins-hbase17:35289] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-08 18:56:21,842 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/8ffb7900-7b06-9033-5109-cab896fa2fe6/MasterData/WALs/jenkins-hbase17.apache.org,35289,1686250533613 2023-06-08 18:56:21,848 INFO [M:0;jenkins-hbase17:35289] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-06-08 18:56:21,848 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-08 18:56:21,849 INFO [M:0;jenkins-hbase17:35289] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:35289 2023-06-08 18:56:21,850 DEBUG [M:0;jenkins-hbase17:35289] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase17.apache.org,35289,1686250533613 already deleted, retry=false 2023-06-08 18:56:21,881 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): regionserver:36375-0x100abcaadde0005, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 18:56:21,881 INFO [RS:1;jenkins-hbase17:36375] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,36375,1686250534947; zookeeper connection closed. 2023-06-08 18:56:21,881 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): regionserver:36375-0x100abcaadde0005, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 18:56:21,882 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@bf50d6f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@bf50d6f 2023-06-08 18:56:21,982 INFO [M:0;jenkins-hbase17:35289] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,35289,1686250533613; zookeeper connection closed. 
2023-06-08 18:56:21,982 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 18:56:21,982 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): master:35289-0x100abcaadde0000, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 18:56:22,074 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-08 18:56:22,082 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): regionserver:41765-0x100abcaadde0001, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 18:56:22,082 DEBUG [Listener at localhost.localdomain/44337-EventThread] zookeeper.ZKWatcher(600): regionserver:41765-0x100abcaadde0001, quorum=127.0.0.1:63926, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 18:56:22,082 INFO [RS:0;jenkins-hbase17:41765] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,41765,1686250533689; zookeeper connection closed. 
2023-06-08 18:56:22,083 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@32d50cc1] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@32d50cc1 2023-06-08 18:56:22,084 INFO [Listener at localhost.localdomain/42749] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 2 regionserver(s) complete 2023-06-08 18:56:22,084 WARN [Listener at localhost.localdomain/42749] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 18:56:22,089 INFO [Listener at localhost.localdomain/42749] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 18:56:22,194 WARN [BP-1651463838-136.243.18.41-1686250532717 heartbeating to localhost.localdomain/127.0.0.1:42703] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-08 18:56:22,195 WARN [BP-1651463838-136.243.18.41-1686250532717 heartbeating to localhost.localdomain/127.0.0.1:42703] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1651463838-136.243.18.41-1686250532717 (Datanode Uuid 5148ade7-3339-43bb-b7ef-d9e85ffd8826) service to localhost.localdomain/127.0.0.1:42703 2023-06-08 18:56:22,195 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/cluster_b99a2616-cab3-0f0b-e54f-fc7bbe7c57e5/dfs/data/data3/current/BP-1651463838-136.243.18.41-1686250532717] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 18:56:22,196 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/cluster_b99a2616-cab3-0f0b-e54f-fc7bbe7c57e5/dfs/data/data4/current/BP-1651463838-136.243.18.41-1686250532717] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh 
disk information: sleep interrupted 2023-06-08 18:56:22,199 WARN [Listener at localhost.localdomain/42749] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 18:56:22,215 INFO [Listener at localhost.localdomain/42749] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 18:56:22,319 WARN [BP-1651463838-136.243.18.41-1686250532717 heartbeating to localhost.localdomain/127.0.0.1:42703] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-08 18:56:22,319 WARN [BP-1651463838-136.243.18.41-1686250532717 heartbeating to localhost.localdomain/127.0.0.1:42703] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1651463838-136.243.18.41-1686250532717 (Datanode Uuid cfed3381-ec20-46c4-8842-44667a8dc53b) service to localhost.localdomain/127.0.0.1:42703 2023-06-08 18:56:22,320 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/cluster_b99a2616-cab3-0f0b-e54f-fc7bbe7c57e5/dfs/data/data9/current/BP-1651463838-136.243.18.41-1686250532717] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 18:56:22,320 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/cluster_b99a2616-cab3-0f0b-e54f-fc7bbe7c57e5/dfs/data/data10/current/BP-1651463838-136.243.18.41-1686250532717] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 18:56:22,333 INFO [Listener at localhost.localdomain/42749] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-06-08 18:56:22,455 INFO [Listener at localhost.localdomain/42749] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK 
cluster with all ZK servers 2023-06-08 18:56:22,507 INFO [Listener at localhost.localdomain/42749] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-06-08 18:56:22,521 INFO [Listener at localhost.localdomain/42749] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=77 (was 51) Potentially hanging thread: RS-EventLoopGroup-6-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-3-worker-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-6-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase17:0.leaseChecker java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.regionserver.LeaseManager.run(LeaseManager.java:82) Potentially hanging thread: nioEventLoopGroup-16-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-15-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1807695942) connection to 
localhost.localdomain/127.0.0.1:42703 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-5-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-17-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) 
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-15-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/42749 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
LeaseRenewer:jenkins.hfs.1@localhost.localdomain:42703 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-6-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.2@localhost.localdomain:42703 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1807695942) connection to localhost.localdomain/127.0.0.1:42703 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-17-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: nioEventLoopGroup-15-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1807695942) connection to localhost.localdomain/127.0.0.1:42703 from jenkins.hfs.1 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:42703 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Abort regionserver monitor java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: nioEventLoopGroup-14-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-3-worker-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: nioEventLoopGroup-17-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1807695942) connection to localhost.localdomain/127.0.0.1:42703 from jenkins.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) 
org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) - Thread LEAK? -, OpenFileDescriptor=471 (was 438) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=294 (was 319), ProcessCount=186 (was 186), AvailableMemoryMB=2044 (was 1887) - AvailableMemoryMB LEAK? - 2023-06-08 18:56:22,532 INFO [Listener at localhost.localdomain/42749] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=77, OpenFileDescriptor=471, MaxFileDescriptor=60000, SystemLoadAverage=294, ProcessCount=186, AvailableMemoryMB=2042 2023-06-08 18:56:22,532 INFO [Listener at localhost.localdomain/42749] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-06-08 18:56:22,532 INFO [Listener at localhost.localdomain/42749] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/hadoop.log.dir so I do NOT create it in target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927 2023-06-08 18:56:22,533 INFO [Listener at localhost.localdomain/42749] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b060a5f6-f311-53ab-0c69-2b64826043be/hadoop.tmp.dir so I do NOT create it in target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927 2023-06-08 18:56:22,533 INFO [Listener at localhost.localdomain/42749] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/cluster_dfac5ae9-826e-4745-790b-900cf043cb22, deleteOnExit=true 2023-06-08 18:56:22,533 INFO [Listener at localhost.localdomain/42749] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-06-08 18:56:22,533 INFO [Listener at localhost.localdomain/42749] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/test.cache.data in system properties and HBase conf 2023-06-08 18:56:22,533 INFO [Listener at localhost.localdomain/42749] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/hadoop.tmp.dir in system properties and HBase conf 2023-06-08 18:56:22,534 INFO [Listener at localhost.localdomain/42749] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/hadoop.log.dir in system properties and HBase conf 2023-06-08 18:56:22,534 INFO [Listener at localhost.localdomain/42749] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/mapreduce.cluster.local.dir in system properties and HBase conf 2023-06-08 18:56:22,534 INFO [Listener at localhost.localdomain/42749] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-06-08 18:56:22,534 INFO [Listener at 
localhost.localdomain/42749] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-06-08 18:56:22,534 DEBUG [Listener at localhost.localdomain/42749] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-06-08 18:56:22,535 INFO [Listener at localhost.localdomain/42749] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-06-08 18:56:22,535 INFO [Listener at localhost.localdomain/42749] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-06-08 18:56:22,535 INFO [Listener at localhost.localdomain/42749] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-06-08 18:56:22,535 INFO [Listener at localhost.localdomain/42749] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-08 18:56:22,535 INFO [Listener at localhost.localdomain/42749] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-06-08 18:56:22,535 INFO [Listener at localhost.localdomain/42749] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-08 18:56:22,536 INFO [Listener at localhost.localdomain/42749] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-08 18:56:22,536 INFO [Listener at localhost.localdomain/42749] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-08 18:56:22,536 INFO [Listener at localhost.localdomain/42749] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-08 18:56:22,536 INFO [Listener at localhost.localdomain/42749] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/nfs.dump.dir in system properties and HBase conf 2023-06-08 
18:56:22,536 INFO [Listener at localhost.localdomain/42749] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/java.io.tmpdir in system properties and HBase conf 2023-06-08 18:56:22,536 INFO [Listener at localhost.localdomain/42749] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-08 18:56:22,537 INFO [Listener at localhost.localdomain/42749] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-08 18:56:22,537 INFO [Listener at localhost.localdomain/42749] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-08 18:56:22,539 WARN [Listener at localhost.localdomain/42749] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-06-08 18:56:22,540 WARN [Listener at localhost.localdomain/42749] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-08 18:56:22,541 WARN [Listener at localhost.localdomain/42749] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-08 18:56:22,573 WARN [Listener at localhost.localdomain/42749] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 18:56:22,577 INFO [Listener at localhost.localdomain/42749] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 18:56:22,595 INFO [Listener at localhost.localdomain/42749] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/java.io.tmpdir/Jetty_localhost_localdomain_39463_hdfs____.frk76j/webapp 2023-06-08 18:56:22,699 INFO [Listener at localhost.localdomain/42749] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:39463 2023-06-08 18:56:22,701 WARN [Listener at localhost.localdomain/42749] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-06-08 18:56:22,703 WARN [Listener at localhost.localdomain/42749] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-06-08 18:56:22,703 WARN [Listener at localhost.localdomain/42749] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-06-08 18:56:22,741 WARN [Listener at localhost.localdomain/33233] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-08 18:56:22,752 WARN [Listener at localhost.localdomain/33233] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-06-08 18:56:22,755 WARN [Listener at localhost.localdomain/33233] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-08 18:56:22,757 INFO [Listener at localhost.localdomain/33233] log.Slf4jLog(67): jetty-6.1.26
2023-06-08 18:56:22,762 INFO [Listener at localhost.localdomain/33233] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/java.io.tmpdir/Jetty_localhost_40803_datanode____.f1wlsq/webapp
2023-06-08 18:56:22,852 INFO [Listener at localhost.localdomain/33233] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40803
2023-06-08 18:56:22,866 WARN [Listener at localhost.localdomain/43577] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-08 18:56:22,914 WARN [Listener at localhost.localdomain/43577] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-06-08 18:56:22,921 WARN [Listener at localhost.localdomain/43577] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-08 18:56:22,923 INFO [Listener at localhost.localdomain/43577] log.Slf4jLog(67): jetty-6.1.26
2023-06-08 18:56:22,934 INFO [Listener at localhost.localdomain/43577] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/java.io.tmpdir/Jetty_localhost_33899_datanode____.p2cxep/webapp
2023-06-08 18:56:22,965 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6596bfdc15899dc7: Processing first storage report for DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a from datanode e692bda4-3fa0-48fb-a0cc-08b6e7f0f404
2023-06-08 18:56:22,965 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6596bfdc15899dc7: from storage DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a node DatanodeRegistration(127.0.0.1:44269, datanodeUuid=e692bda4-3fa0-48fb-a0cc-08b6e7f0f404, infoPort=45283, infoSecurePort=0, ipcPort=43577, storageInfo=lv=-57;cid=testClusterID;nsid=1350837996;c=1686250582543), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-08 18:56:22,966 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6596bfdc15899dc7: Processing first storage report for DS-6caf5336-8392-4292-b82e-e32680f388b2 from datanode e692bda4-3fa0-48fb-a0cc-08b6e7f0f404
2023-06-08 18:56:22,966 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6596bfdc15899dc7: from storage DS-6caf5336-8392-4292-b82e-e32680f388b2 node DatanodeRegistration(127.0.0.1:44269, datanodeUuid=e692bda4-3fa0-48fb-a0cc-08b6e7f0f404, infoPort=45283, infoSecurePort=0, ipcPort=43577, storageInfo=lv=-57;cid=testClusterID;nsid=1350837996;c=1686250582543), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-08 18:56:23,017 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-06-08 18:56:23,029 INFO [Listener at localhost.localdomain/43577] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33899
2023-06-08 18:56:23,039 WARN [Listener at localhost.localdomain/35315] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-08 18:56:23,118 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe435e80ef35446b9: Processing first storage report for DS-8c66ac3d-a5ac-4ae1-b398-8dfb907d77a7 from datanode dfebf782-3cb5-4925-b839-592680324388
2023-06-08 18:56:23,118 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe435e80ef35446b9: from storage DS-8c66ac3d-a5ac-4ae1-b398-8dfb907d77a7 node DatanodeRegistration(127.0.0.1:46353, datanodeUuid=dfebf782-3cb5-4925-b839-592680324388, infoPort=46103, infoSecurePort=0, ipcPort=35315, storageInfo=lv=-57;cid=testClusterID;nsid=1350837996;c=1686250582543), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-08 18:56:23,118 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe435e80ef35446b9: Processing first storage report for DS-f752b46a-7b86-4035-b674-0f6cdd0b0cf2 from datanode dfebf782-3cb5-4925-b839-592680324388
2023-06-08 18:56:23,118 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe435e80ef35446b9: from storage DS-f752b46a-7b86-4035-b674-0f6cdd0b0cf2 node DatanodeRegistration(127.0.0.1:46353, datanodeUuid=dfebf782-3cb5-4925-b839-592680324388, infoPort=46103, infoSecurePort=0, ipcPort=35315, storageInfo=lv=-57;cid=testClusterID;nsid=1350837996;c=1686250582543), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-08 18:56:23,153 DEBUG [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927
2023-06-08 18:56:23,158 INFO [Listener at localhost.localdomain/35315] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/cluster_dfac5ae9-826e-4745-790b-900cf043cb22/zookeeper_0, clientPort=53036, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/cluster_dfac5ae9-826e-4745-790b-900cf043cb22/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/cluster_dfac5ae9-826e-4745-790b-900cf043cb22/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-06-08 18:56:23,163 INFO [Listener at localhost.localdomain/35315] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=53036
2023-06-08 18:56:23,163 INFO [Listener at localhost.localdomain/35315] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-08 18:56:23,165 INFO [Listener at localhost.localdomain/35315] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-08 18:56:23,218 INFO [Listener at localhost.localdomain/35315] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32 with version=8
2023-06-08 18:56:23,218 INFO [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/hbase-staging
2023-06-08 18:56:23,220 INFO [Listener at localhost.localdomain/35315] client.ConnectionUtils(127): master/jenkins-hbase17:0 server-side Connection retries=45
2023-06-08 18:56:23,220 INFO [Listener at localhost.localdomain/35315] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-08 18:56:23,220 INFO [Listener at localhost.localdomain/35315] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-06-08 18:56:23,220 INFO [Listener at localhost.localdomain/35315] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-06-08 18:56:23,221 INFO [Listener at localhost.localdomain/35315] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-08 18:56:23,221 INFO [Listener at localhost.localdomain/35315] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-06-08 18:56:23,221 INFO [Listener at localhost.localdomain/35315] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-06-08 18:56:23,228 INFO [Listener at localhost.localdomain/35315] ipc.NettyRpcServer(120): Bind to /136.243.18.41:46567
2023-06-08 18:56:23,229 INFO [Listener at localhost.localdomain/35315] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-08 18:56:23,230 INFO [Listener at localhost.localdomain/35315] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-08 18:56:23,232 INFO [Listener at localhost.localdomain/35315] zookeeper.RecoverableZooKeeper(93): Process identifier=master:46567 connecting to ZooKeeper ensemble=127.0.0.1:53036
2023-06-08 18:56:23,245 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:465670x0, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-06-08 18:56:23,251 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:46567-0x100abcb6f930000 connected
2023-06-08 18:56:23,292 DEBUG [Listener at localhost.localdomain/35315] zookeeper.ZKUtil(164): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-06-08 18:56:23,292 DEBUG [Listener at localhost.localdomain/35315] zookeeper.ZKUtil(164): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-08 18:56:23,293 DEBUG [Listener at localhost.localdomain/35315] zookeeper.ZKUtil(164): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-06-08 18:56:23,304 DEBUG [Listener at localhost.localdomain/35315] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46567
2023-06-08 18:56:23,304 DEBUG [Listener at localhost.localdomain/35315] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46567
2023-06-08 18:56:23,308 DEBUG [Listener at localhost.localdomain/35315] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46567
2023-06-08 18:56:23,309 DEBUG [Listener at localhost.localdomain/35315] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46567
2023-06-08 18:56:23,309 DEBUG [Listener at localhost.localdomain/35315] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46567
2023-06-08 18:56:23,310 INFO [Listener at localhost.localdomain/35315] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32, hbase.cluster.distributed=false
2023-06-08 18:56:23,326 INFO [Listener at localhost.localdomain/35315] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45
2023-06-08 18:56:23,327 INFO [Listener at localhost.localdomain/35315] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-08 18:56:23,327 INFO [Listener at localhost.localdomain/35315] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-06-08 18:56:23,327 INFO [Listener at localhost.localdomain/35315] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-06-08 18:56:23,327 INFO [Listener at localhost.localdomain/35315] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-08 18:56:23,327 INFO [Listener at localhost.localdomain/35315] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-06-08 18:56:23,328 INFO [Listener at localhost.localdomain/35315] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-06-08 18:56:23,336 INFO [Listener at localhost.localdomain/35315] ipc.NettyRpcServer(120): Bind to /136.243.18.41:34297
2023-06-08 18:56:23,337 INFO [Listener at localhost.localdomain/35315] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-06-08 18:56:23,338 DEBUG [Listener at localhost.localdomain/35315] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-06-08 18:56:23,339 INFO [Listener at localhost.localdomain/35315] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-08 18:56:23,341 INFO [Listener at localhost.localdomain/35315] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-08 18:56:23,343 INFO [Listener at localhost.localdomain/35315] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34297 connecting to ZooKeeper ensemble=127.0.0.1:53036
2023-06-08 18:56:23,351 DEBUG [Listener at localhost.localdomain/35315] zookeeper.ZKUtil(164): regionserver:342970x0, quorum=127.0.0.1:53036, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-06-08 18:56:23,352 DEBUG [Listener at localhost.localdomain/35315] zookeeper.ZKUtil(164): regionserver:342970x0, quorum=127.0.0.1:53036, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-08 18:56:23,353 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): regionserver:342970x0, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-06-08 18:56:23,354 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34297-0x100abcb6f930001 connected
2023-06-08 18:56:23,355 DEBUG [Listener at localhost.localdomain/35315] zookeeper.ZKUtil(164): regionserver:34297-0x100abcb6f930001, quorum=127.0.0.1:53036, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-06-08 18:56:23,355 DEBUG [Listener at localhost.localdomain/35315] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34297
2023-06-08 18:56:23,355 DEBUG [Listener at localhost.localdomain/35315] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34297
2023-06-08 18:56:23,356 DEBUG [Listener at localhost.localdomain/35315] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34297
2023-06-08 18:56:23,361 DEBUG [Listener at localhost.localdomain/35315] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34297
2023-06-08 18:56:23,361 DEBUG [Listener at localhost.localdomain/35315] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34297
2023-06-08 18:56:23,363 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase17.apache.org,46567,1686250583219
2023-06-08 18:56:23,366 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-06-08 18:56:23,366 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase17.apache.org,46567,1686250583219
2023-06-08 18:56:23,370 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-06-08 18:56:23,370 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-08 18:56:23,372 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): regionserver:34297-0x100abcb6f930001, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-06-08 18:56:23,372 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-06-08 18:56:23,372 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-06-08 18:56:23,374 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase17.apache.org,46567,1686250583219 from backup master directory
2023-06-08 18:56:23,375 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase17.apache.org,46567,1686250583219
2023-06-08 18:56:23,375 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-06-08 18:56:23,375 WARN [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-06-08 18:56:23,375 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase17.apache.org,46567,1686250583219
2023-06-08 18:56:23,400 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/hbase.id with ID: 8497aef3-cab6-4256-9529-7c2d77e6c48a
2023-06-08 18:56:23,416 INFO [master/jenkins-hbase17:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-08 18:56:23,422 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-08 18:56:23,467 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x04cca221 to 127.0.0.1:53036 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-06-08 18:56:23,485 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3adce32c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-06-08 18:56:23,485 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-06-08 18:56:23,486 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000
2023-06-08 18:56:23,487 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-06-08 18:56:23,489 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/MasterData/data/master/store-tmp
2023-06-08 18:56:23,502 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-08 18:56:23,502 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-06-08 18:56:23,502 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-08 18:56:23,502 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-08 18:56:23,502 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-06-08 18:56:23,502 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-08 18:56:23,502 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-08 18:56:23,502 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-06-08 18:56:23,504 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/MasterData/WALs/jenkins-hbase17.apache.org,46567,1686250583219
2023-06-08 18:56:23,508 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C46567%2C1686250583219, suffix=, logDir=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/MasterData/WALs/jenkins-hbase17.apache.org,46567,1686250583219, archiveDir=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/MasterData/oldWALs, maxLogs=10
2023-06-08 18:56:23,556 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/MasterData/WALs/jenkins-hbase17.apache.org,46567,1686250583219/jenkins-hbase17.apache.org%2C46567%2C1686250583219.1686250583512
2023-06-08 18:56:23,556 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46353,DS-8c66ac3d-a5ac-4ae1-b398-8dfb907d77a7,DISK], DatanodeInfoWithStorage[127.0.0.1:44269,DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a,DISK]]
2023-06-08 18:56:23,556 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}
2023-06-08 18:56:23,557 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-08 18:56:23,557 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682
2023-06-08 18:56:23,557 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682
2023-06-08 18:56:23,570 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682
2023-06-08 18:56:23,575 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc
2023-06-08 18:56:23,576 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc
2023-06-08 18:56:23,580 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-08 18:56:23,585 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682
2023-06-08 18:56:23,587 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682
2023-06-08 18:56:23,608 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682
2023-06-08 18:56:23,611 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-06-08 18:56:23,612 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=703822, jitterRate=-0.1050446629524231}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-06-08 18:56:23,612 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682:
2023-06-08 18:56:23,613 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4
2023-06-08 18:56:23,617 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5
2023-06-08 18:56:23,617 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50
2023-06-08 18:56:23,617 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery...
2023-06-08 18:56:23,618 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec
2023-06-08 18:56:23,618 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec
2023-06-08 18:56:23,618 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150
2023-06-08 18:56:23,619 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: []
2023-06-08 18:56:23,620 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'.
2023-06-08 18:56:23,634 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false
2023-06-08 18:56:23,635 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc.
2023-06-08 18:56:23,638 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer
2023-06-08 18:56:23,638 INFO [master/jenkins-hbase17:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited
2023-06-08 18:56:23,639 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer
2023-06-08 18:56:23,641 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-08 18:56:23,644 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split
2023-06-08 18:56:23,645 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge
2023-06-08 18:56:23,646 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup
2023-06-08 18:56:23,653 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running
2023-06-08 18:56:23,653 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-08 18:56:23,654 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): regionserver:34297-0x100abcb6f930001, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running
2023-06-08 18:56:23,657 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase17.apache.org,46567,1686250583219, sessionid=0x100abcb6f930000, setting cluster-up flag (Was=false)
2023-06-08 18:56:23,678 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-08 18:56:23,700 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort
2023-06-08 18:56:23,713 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,46567,1686250583219
2023-06-08 18:56:23,725 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-08 18:56:23,728 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort
2023-06-08 18:56:23,729 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,46567,1686250583219
2023-06-08 18:56:23,730 WARN [master/jenkins-hbase17:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/.hbase-snapshot/.tmp
2023-06-08 18:56:23,745 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta
2023-06-08 18:56:23,745 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5
2023-06-08 18:56:23,746 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5
2023-06-08 18:56:23,746 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5
2023-06-08 18:56:23,746 DEBUG
[master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-06-08 18:56:23,746 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase17:0, corePoolSize=10, maxPoolSize=10 2023-06-08 18:56:23,746 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:56:23,746 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-06-08 18:56:23,746 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:56:23,760 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1686250613760 2023-06-08 18:56:23,761 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-08 18:56:23,765 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-08 18:56:23,766 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-08 18:56:23,766 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-08 18:56:23,766 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-08 18:56:23,766 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-08 18:56:23,780 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-08 18:56:23,800 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-08 18:56:23,800 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-08 18:56:23,800 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-08 18:56:23,805 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-08 18:56:23,805 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-08 18:56:23,806 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-08 18:56:23,806 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-08 18:56:23,808 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large 
file=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1686250583806,5,FailOnTimeoutGroup] 2023-06-08 18:56:23,810 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-08 18:56:23,812 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1686250583808,5,FailOnTimeoutGroup] 2023-06-08 18:56:23,812 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-08 18:56:23,812 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 
2023-06-08 18:56:23,812 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-08 18:56:23,812 INFO [RS:0;jenkins-hbase17:34297] regionserver.HRegionServer(951): ClusterId : 8497aef3-cab6-4256-9529-7c2d77e6c48a 2023-06-08 18:56:23,812 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-06-08 18:56:23,829 DEBUG [RS:0;jenkins-hbase17:34297] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-08 18:56:23,839 DEBUG [RS:0;jenkins-hbase17:34297] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-08 18:56:23,839 DEBUG [RS:0;jenkins-hbase17:34297] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-08 18:56:23,842 DEBUG [RS:0;jenkins-hbase17:34297] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-08 18:56:23,850 DEBUG [RS:0;jenkins-hbase17:34297] zookeeper.ReadOnlyZKClient(139): Connect 0x599a4762 to 127.0.0.1:53036 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-08 18:56:23,855 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-08 18:56:23,856 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-08 18:56:23,856 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', 
{TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32 2023-06-08 18:56:23,864 DEBUG [RS:0;jenkins-hbase17:34297] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5da5c889, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-08 18:56:23,865 DEBUG [RS:0;jenkins-hbase17:34297] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@18a74d59, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-06-08 18:56:23,895 DEBUG [RS:0;jenkins-hbase17:34297] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase17:34297 2023-06-08 18:56:23,895 INFO [RS:0;jenkins-hbase17:34297] 
regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-08 18:56:23,895 INFO [RS:0;jenkins-hbase17:34297] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-08 18:56:23,895 DEBUG [RS:0;jenkins-hbase17:34297] regionserver.HRegionServer(1022): About to register with Master. 2023-06-08 18:56:23,897 INFO [RS:0;jenkins-hbase17:34297] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase17.apache.org,46567,1686250583219 with isa=jenkins-hbase17.apache.org/136.243.18.41:34297, startcode=1686250583326 2023-06-08 18:56:23,898 DEBUG [RS:0;jenkins-hbase17:34297] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-08 18:56:23,904 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:56:23,911 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-08 18:56:23,919 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/hbase/meta/1588230740/info 2023-06-08 18:56:23,920 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 
6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-08 18:56:23,923 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:56:23,924 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-08 18:56:23,924 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:40105, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-06-08 18:56:23,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46567] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,34297,1686250583326 2023-06-08 18:56:23,926 DEBUG [RS:0;jenkins-hbase17:34297] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32 2023-06-08 18:56:23,926 DEBUG [RS:0;jenkins-hbase17:34297] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:33233 2023-06-08 18:56:23,926 DEBUG [RS:0;jenkins-hbase17:34297] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-08 18:56:23,927 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/hbase/meta/1588230740/rep_barrier 2023-06-08 18:56:23,928 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-08 18:56:23,928 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 18:56:23,928 DEBUG [RS:0;jenkins-hbase17:34297] zookeeper.ZKUtil(162): regionserver:34297-0x100abcb6f930001, quorum=127.0.0.1:53036, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34297,1686250583326 2023-06-08 18:56:23,928 WARN [RS:0;jenkins-hbase17:34297] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-06-08 18:56:23,929 INFO [RS:0;jenkins-hbase17:34297] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 18:56:23,929 DEBUG [RS:0;jenkins-hbase17:34297] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326 2023-06-08 18:56:23,930 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:56:23,930 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-08 18:56:23,930 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,34297,1686250583326] 2023-06-08 18:56:23,938 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/hbase/meta/1588230740/table 2023-06-08 18:56:23,939 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-08 18:56:23,939 DEBUG [RS:0;jenkins-hbase17:34297] zookeeper.ZKUtil(162): regionserver:34297-0x100abcb6f930001, quorum=127.0.0.1:53036, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34297,1686250583326 2023-06-08 18:56:23,940 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:56:23,940 DEBUG [RS:0;jenkins-hbase17:34297] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-08 18:56:23,941 INFO [RS:0;jenkins-hbase17:34297] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-08 18:56:23,949 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/hbase/meta/1588230740 2023-06-08 18:56:23,949 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/hbase/meta/1588230740 2023-06-08 18:56:23,952 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-06-08 18:56:23,954 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-08 18:56:23,967 INFO [RS:0;jenkins-hbase17:34297] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-08 18:56:23,968 INFO [RS:0;jenkins-hbase17:34297] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-08 18:56:23,968 INFO [RS:0;jenkins-hbase17:34297] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-08 18:56:23,975 INFO [RS:0;jenkins-hbase17:34297] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-08 18:56:23,978 INFO [RS:0;jenkins-hbase17:34297] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-06-08 18:56:23,978 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 18:56:23,978 DEBUG [RS:0;jenkins-hbase17:34297] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:56:23,978 DEBUG [RS:0;jenkins-hbase17:34297] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:56:23,978 DEBUG [RS:0;jenkins-hbase17:34297] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:56:23,978 DEBUG [RS:0;jenkins-hbase17:34297] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:56:23,978 DEBUG [RS:0;jenkins-hbase17:34297] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:56:23,978 DEBUG [RS:0;jenkins-hbase17:34297] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-06-08 18:56:23,978 DEBUG [RS:0;jenkins-hbase17:34297] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:56:23,978 DEBUG [RS:0;jenkins-hbase17:34297] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:56:23,979 DEBUG [RS:0;jenkins-hbase17:34297] executor.ExecutorService(93): Starting executor service 
name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:56:23,979 DEBUG [RS:0;jenkins-hbase17:34297] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:56:23,979 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=707743, jitterRate=-0.10005827248096466}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-08 18:56:23,979 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-08 18:56:23,979 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-08 18:56:23,980 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-08 18:56:23,981 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-08 18:56:23,980 INFO [RS:0;jenkins-hbase17:34297] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-08 18:56:23,981 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-08 18:56:23,981 INFO [RS:0;jenkins-hbase17:34297] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-08 18:56:23,981 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-08 18:56:23,981 INFO [RS:0;jenkins-hbase17:34297] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-06-08 18:56:23,984 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-08 18:56:23,984 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-08 18:56:23,985 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-08 18:56:23,985 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-08 18:56:23,986 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-08 18:56:23,988 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-08 18:56:23,990 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-08 18:56:24,000 INFO [RS:0;jenkins-hbase17:34297] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-08 18:56:24,000 INFO [RS:0;jenkins-hbase17:34297] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,34297,1686250583326-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-06-08 18:56:24,014 INFO [RS:0;jenkins-hbase17:34297] regionserver.Replication(203): jenkins-hbase17.apache.org,34297,1686250583326 started 2023-06-08 18:56:24,014 INFO [RS:0;jenkins-hbase17:34297] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,34297,1686250583326, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:34297, sessionid=0x100abcb6f930001 2023-06-08 18:56:24,015 DEBUG [RS:0;jenkins-hbase17:34297] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-08 18:56:24,015 DEBUG [RS:0;jenkins-hbase17:34297] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,34297,1686250583326 2023-06-08 18:56:24,015 DEBUG [RS:0;jenkins-hbase17:34297] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,34297,1686250583326' 2023-06-08 18:56:24,015 DEBUG [RS:0;jenkins-hbase17:34297] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-08 18:56:24,015 DEBUG [RS:0;jenkins-hbase17:34297] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-08 18:56:24,016 DEBUG [RS:0;jenkins-hbase17:34297] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-08 18:56:24,016 DEBUG [RS:0;jenkins-hbase17:34297] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-08 18:56:24,016 DEBUG [RS:0;jenkins-hbase17:34297] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,34297,1686250583326 2023-06-08 18:56:24,016 DEBUG [RS:0;jenkins-hbase17:34297] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,34297,1686250583326' 2023-06-08 18:56:24,016 DEBUG [RS:0;jenkins-hbase17:34297] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures 
on node: '/hbase/online-snapshot/abort' 2023-06-08 18:56:24,016 DEBUG [RS:0;jenkins-hbase17:34297] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-08 18:56:24,017 DEBUG [RS:0;jenkins-hbase17:34297] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-08 18:56:24,017 INFO [RS:0;jenkins-hbase17:34297] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-08 18:56:24,017 INFO [RS:0;jenkins-hbase17:34297] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-06-08 18:56:24,125 INFO [RS:0;jenkins-hbase17:34297] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C34297%2C1686250583326, suffix=, logDir=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326, archiveDir=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/oldWALs, maxLogs=32 2023-06-08 18:56:24,140 DEBUG [jenkins-hbase17:46567] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-08 18:56:24,141 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,34297,1686250583326, state=OPENING 2023-06-08 18:56:24,142 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-08 18:56:24,143 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:56:24,143 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; 
OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,34297,1686250583326}] 2023-06-08 18:56:24,143 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-08 18:56:24,153 INFO [RS:0;jenkins-hbase17:34297] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250584128 2023-06-08 18:56:24,153 DEBUG [RS:0;jenkins-hbase17:34297] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46353,DS-8c66ac3d-a5ac-4ae1-b398-8dfb907d77a7,DISK], DatanodeInfoWithStorage[127.0.0.1:44269,DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a,DISK]] 2023-06-08 18:56:24,300 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,34297,1686250583326 2023-06-08 18:56:24,300 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-08 18:56:24,304 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:38094, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-08 18:56:24,308 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-08 18:56:24,308 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 18:56:24,311 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C34297%2C1686250583326.meta, suffix=.meta, 
logDir=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326, archiveDir=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/oldWALs, maxLogs=32 2023-06-08 18:56:24,330 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.meta.1686250584320.meta 2023-06-08 18:56:24,330 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46353,DS-8c66ac3d-a5ac-4ae1-b398-8dfb907d77a7,DISK], DatanodeInfoWithStorage[127.0.0.1:44269,DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a,DISK]] 2023-06-08 18:56:24,330 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-08 18:56:24,331 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-08 18:56:24,331 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-08 18:56:24,331 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-06-08 18:56:24,332 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-08 18:56:24,332 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:56:24,332 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-08 18:56:24,332 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-08 18:56:24,341 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-08 18:56:24,343 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/hbase/meta/1588230740/info 2023-06-08 18:56:24,343 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/hbase/meta/1588230740/info 2023-06-08 18:56:24,343 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-08 18:56:24,344 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:56:24,344 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-08 18:56:24,345 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/hbase/meta/1588230740/rep_barrier 2023-06-08 18:56:24,345 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/hbase/meta/1588230740/rep_barrier 2023-06-08 18:56:24,346 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-08 18:56:24,346 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:56:24,347 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-08 18:56:24,347 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/hbase/meta/1588230740/table 2023-06-08 18:56:24,348 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/hbase/meta/1588230740/table 2023-06-08 18:56:24,348 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-08 18:56:24,348 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:56:24,350 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/hbase/meta/1588230740 2023-06-08 18:56:24,351 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/hbase/meta/1588230740 2023-06-08 18:56:24,353 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-06-08 18:56:24,356 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-08 18:56:24,357 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=722764, jitterRate=-0.08095873892307281}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-08 18:56:24,357 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-08 18:56:24,359 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1686250584300 2023-06-08 18:56:24,363 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-08 18:56:24,364 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-08 18:56:24,364 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,34297,1686250583326, state=OPEN 2023-06-08 18:56:24,366 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-08 18:56:24,366 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-08 18:56:24,369 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-08 18:56:24,369 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,34297,1686250583326 in 223 msec 2023-06-08 18:56:24,371 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-08 18:56:24,371 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 384 msec 2023-06-08 18:56:24,374 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 634 msec 2023-06-08 18:56:24,374 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1686250584374, completionTime=-1 2023-06-08 18:56:24,375 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-08 18:56:24,375 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-06-08 18:56:24,384 DEBUG [hconnection-0x1728427a-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-08 18:56:24,387 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:38098, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-08 18:56:24,394 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-08 18:56:24,394 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1686250644394 2023-06-08 18:56:24,395 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1686250704395 2023-06-08 18:56:24,395 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 19 msec 2023-06-08 18:56:24,414 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,46567,1686250583219-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-08 18:56:24,414 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,46567,1686250583219-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-08 18:56:24,414 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,46567,1686250583219-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-06-08 18:56:24,414 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase17:46567, period=300000, unit=MILLISECONDS is enabled. 2023-06-08 18:56:24,414 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-08 18:56:24,414 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-06-08 18:56:24,415 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-08 18:56:24,417 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-08 18:56:24,427 DEBUG [master/jenkins-hbase17:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-08 18:56:24,429 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-08 18:56:24,433 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-08 18:56:24,435 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/.tmp/data/hbase/namespace/88de32a5a86759b62951f20f99d10abb 2023-06-08 18:56:24,436 DEBUG [HFileArchiver-5] 
backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/.tmp/data/hbase/namespace/88de32a5a86759b62951f20f99d10abb empty. 2023-06-08 18:56:24,441 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/.tmp/data/hbase/namespace/88de32a5a86759b62951f20f99d10abb 2023-06-08 18:56:24,441 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-08 18:56:24,478 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-08 18:56:24,481 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 88de32a5a86759b62951f20f99d10abb, NAME => 'hbase:namespace,,1686250584415.88de32a5a86759b62951f20f99d10abb.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/.tmp 2023-06-08 18:56:24,528 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686250584415.88de32a5a86759b62951f20f99d10abb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:56:24,528 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 88de32a5a86759b62951f20f99d10abb, disabling compactions & flushes 2023-06-08 18:56:24,528 INFO 
[RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686250584415.88de32a5a86759b62951f20f99d10abb. 2023-06-08 18:56:24,528 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686250584415.88de32a5a86759b62951f20f99d10abb. 2023-06-08 18:56:24,528 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686250584415.88de32a5a86759b62951f20f99d10abb. after waiting 0 ms 2023-06-08 18:56:24,528 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686250584415.88de32a5a86759b62951f20f99d10abb. 2023-06-08 18:56:24,529 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686250584415.88de32a5a86759b62951f20f99d10abb. 2023-06-08 18:56:24,529 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 88de32a5a86759b62951f20f99d10abb: 2023-06-08 18:56:24,531 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-08 18:56:24,532 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1686250584415.88de32a5a86759b62951f20f99d10abb.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686250584532"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686250584532"}]},"ts":"1686250584532"} 2023-06-08 18:56:24,535 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-06-08 18:56:24,537 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-08 18:56:24,537 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686250584537"}]},"ts":"1686250584537"} 2023-06-08 18:56:24,539 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-08 18:56:24,543 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=88de32a5a86759b62951f20f99d10abb, ASSIGN}] 2023-06-08 18:56:24,547 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=88de32a5a86759b62951f20f99d10abb, ASSIGN 2023-06-08 18:56:24,549 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=88de32a5a86759b62951f20f99d10abb, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,34297,1686250583326; forceNewPlan=false, retain=false 2023-06-08 18:56:24,701 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=88de32a5a86759b62951f20f99d10abb, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,34297,1686250583326 2023-06-08 18:56:24,701 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1686250584415.88de32a5a86759b62951f20f99d10abb.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686250584700"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686250584700"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686250584700"}]},"ts":"1686250584700"} 2023-06-08 18:56:24,703 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 88de32a5a86759b62951f20f99d10abb, server=jenkins-hbase17.apache.org,34297,1686250583326}] 2023-06-08 18:56:24,859 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1686250584415.88de32a5a86759b62951f20f99d10abb. 2023-06-08 18:56:24,859 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 88de32a5a86759b62951f20f99d10abb, NAME => 'hbase:namespace,,1686250584415.88de32a5a86759b62951f20f99d10abb.', STARTKEY => '', ENDKEY => ''} 2023-06-08 18:56:24,860 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 88de32a5a86759b62951f20f99d10abb 2023-06-08 18:56:24,860 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686250584415.88de32a5a86759b62951f20f99d10abb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:56:24,860 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 88de32a5a86759b62951f20f99d10abb 2023-06-08 18:56:24,860 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 88de32a5a86759b62951f20f99d10abb 2023-06-08 18:56:24,862 INFO 
[StoreOpener-88de32a5a86759b62951f20f99d10abb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 88de32a5a86759b62951f20f99d10abb 2023-06-08 18:56:24,863 DEBUG [StoreOpener-88de32a5a86759b62951f20f99d10abb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/hbase/namespace/88de32a5a86759b62951f20f99d10abb/info 2023-06-08 18:56:24,863 DEBUG [StoreOpener-88de32a5a86759b62951f20f99d10abb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/hbase/namespace/88de32a5a86759b62951f20f99d10abb/info 2023-06-08 18:56:24,864 INFO [StoreOpener-88de32a5a86759b62951f20f99d10abb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 88de32a5a86759b62951f20f99d10abb columnFamilyName info 2023-06-08 18:56:24,864 INFO [StoreOpener-88de32a5a86759b62951f20f99d10abb-1] regionserver.HStore(310): Store=88de32a5a86759b62951f20f99d10abb/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 
2023-06-08 18:56:24,865 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/hbase/namespace/88de32a5a86759b62951f20f99d10abb 2023-06-08 18:56:24,865 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/hbase/namespace/88de32a5a86759b62951f20f99d10abb 2023-06-08 18:56:24,868 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 88de32a5a86759b62951f20f99d10abb 2023-06-08 18:56:24,871 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/hbase/namespace/88de32a5a86759b62951f20f99d10abb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 18:56:24,872 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 88de32a5a86759b62951f20f99d10abb; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=867508, jitterRate=0.10309363901615143}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-08 18:56:24,872 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 88de32a5a86759b62951f20f99d10abb: 2023-06-08 18:56:24,875 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1686250584415.88de32a5a86759b62951f20f99d10abb., pid=6, masterSystemTime=1686250584855 2023-06-08 18:56:24,878 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1686250584415.88de32a5a86759b62951f20f99d10abb.
2023-06-08 18:56:24,878 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1686250584415.88de32a5a86759b62951f20f99d10abb.
2023-06-08 18:56:24,879 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=88de32a5a86759b62951f20f99d10abb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,34297,1686250583326
2023-06-08 18:56:24,879 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1686250584415.88de32a5a86759b62951f20f99d10abb.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686250584879"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686250584879"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686250584879"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686250584879"}]},"ts":"1686250584879"}
2023-06-08 18:56:24,885 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5
2023-06-08 18:56:24,885 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 88de32a5a86759b62951f20f99d10abb, server=jenkins-hbase17.apache.org,34297,1686250583326 in 178 msec
2023-06-08 18:56:24,892 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4
2023-06-08 18:56:24,892 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=88de32a5a86759b62951f20f99d10abb, ASSIGN in 342 msec
2023-06-08 18:56:24,893 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE
2023-06-08 18:56:24,893 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686250584893"}]},"ts":"1686250584893"}
2023-06-08 18:56:24,895 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta
2023-06-08 18:56:24,897 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION
2023-06-08 18:56:24,900 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 482 msec
2023-06-08 18:56:24,925 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace
2023-06-08 18:56:24,926 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace
2023-06-08 18:56:24,926 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-08 18:56:24,932 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default
2023-06-08 18:56:24,947 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace
2023-06-08 18:56:24,958 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default
2023-06-08 18:56:24,962 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 30 msec
2023-06-08 18:56:24,975 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase
2023-06-08 18:56:24,984 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace
2023-06-08 18:56:24,988 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 14 msec
2023-06-08 18:56:24,999 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default
2023-06-08 18:56:25,000 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase
2023-06-08 18:56:25,000 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.625sec
2023-06-08 18:56:25,001 INFO [master/jenkins-hbase17:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled
2023-06-08 18:56:25,001 INFO [master/jenkins-hbase17:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting.
2023-06-08 18:56:25,002 INFO [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding
2023-06-08 18:56:25,002 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,46567,1686250583219-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled.
2023-06-08 18:56:25,002 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,46567,1686250583219-MobCompactionChore, period=604800, unit=SECONDS is enabled.
2023-06-08 18:56:25,004 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds
2023-06-08 18:56:25,013 DEBUG [Listener at localhost.localdomain/35315] zookeeper.ReadOnlyZKClient(139): Connect 0x02099140 to 127.0.0.1:53036 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-06-08 18:56:25,018 DEBUG [Listener at localhost.localdomain/35315] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@da1bce2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-06-08 18:56:25,020 DEBUG [hconnection-0x33f62b03-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-06-08 18:56:25,023 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:38102, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-06-08 18:56:25,025 INFO [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase17.apache.org,46567,1686250583219
2023-06-08 18:56:25,025 INFO [Listener at localhost.localdomain/35315] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-08 18:56:25,027 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer
2023-06-08 18:56:25,027 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-08 18:56:25,028 INFO [Listener at localhost.localdomain/35315] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false
2023-06-08 18:56:25,028 INFO [Listener at localhost.localdomain/35315] wal.TestLogRolling(429): Starting testLogRollOnPipelineRestart
2023-06-08 18:56:25,028 INFO [Listener at localhost.localdomain/35315] wal.TestLogRolling(432): Replication=2
2023-06-08 18:56:25,030 DEBUG [Listener at localhost.localdomain/35315] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false
2023-06-08 18:56:25,036 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:52382, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService
2023-06-08 18:56:25,037 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46567] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions.
2023-06-08 18:56:25,038 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46567] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing.
2023-06-08 18:56:25,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46567] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-06-08 18:56:25,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46567] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart
2023-06-08 18:56:25,042 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_PRE_OPERATION
2023-06-08 18:56:25,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46567] master.MasterRpcServices(697): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnPipelineRestart" procId is: 9
2023-06-08 18:56:25,043 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_WRITE_FS_LAYOUT
2023-06-08 18:56:25,043 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46567] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9
2023-06-08 18:56:25,045 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/1bb2430bf3171d76616c7e1910abdc0f
2023-06-08 18:56:25,045 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/1bb2430bf3171d76616c7e1910abdc0f empty.
2023-06-08 18:56:25,046 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/1bb2430bf3171d76616c7e1910abdc0f
2023-06-08 18:56:25,046 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnPipelineRestart regions
2023-06-08 18:56:25,060 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/.tabledesc/.tableinfo.0000000001
2023-06-08 18:56:25,061 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1bb2430bf3171d76616c7e1910abdc0f, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1686250585037.1bb2430bf3171d76616c7e1910abdc0f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/.tmp
2023-06-08 18:56:25,475 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1686250585037.1bb2430bf3171d76616c7e1910abdc0f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-08 18:56:25,476 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1604): Closing 1bb2430bf3171d76616c7e1910abdc0f, disabling compactions & flushes
2023-06-08 18:56:25,476 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1686250585037.1bb2430bf3171d76616c7e1910abdc0f.
2023-06-08 18:56:25,476 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1686250585037.1bb2430bf3171d76616c7e1910abdc0f.
2023-06-08 18:56:25,476 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1686250585037.1bb2430bf3171d76616c7e1910abdc0f. after waiting 0 ms
2023-06-08 18:56:25,476 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1686250585037.1bb2430bf3171d76616c7e1910abdc0f.
2023-06-08 18:56:25,476 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnPipelineRestart,,1686250585037.1bb2430bf3171d76616c7e1910abdc0f.
2023-06-08 18:56:25,476 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1558): Region close journal for 1bb2430bf3171d76616c7e1910abdc0f:
2023-06-08 18:56:25,480 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ADD_TO_META
2023-06-08 18:56:25,481 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1686250585037.1bb2430bf3171d76616c7e1910abdc0f.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1686250585480"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686250585480"}]},"ts":"1686250585480"}
2023-06-08 18:56:25,483 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta.
2023-06-08 18:56:25,485 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ASSIGN_REGIONS
2023-06-08 18:56:25,485 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686250585485"}]},"ts":"1686250585485"}
2023-06-08 18:56:25,487 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLING in hbase:meta
2023-06-08 18:56:25,490 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=1bb2430bf3171d76616c7e1910abdc0f, ASSIGN}]
2023-06-08 18:56:25,493 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=1bb2430bf3171d76616c7e1910abdc0f, ASSIGN
2023-06-08 18:56:25,495 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=1bb2430bf3171d76616c7e1910abdc0f, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,34297,1686250583326; forceNewPlan=false, retain=false
2023-06-08 18:56:25,646 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=1bb2430bf3171d76616c7e1910abdc0f, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,34297,1686250583326
2023-06-08 18:56:25,646 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1686250585037.1bb2430bf3171d76616c7e1910abdc0f.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1686250585646"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686250585646"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686250585646"}]},"ts":"1686250585646"}
2023-06-08 18:56:25,649 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 1bb2430bf3171d76616c7e1910abdc0f, server=jenkins-hbase17.apache.org,34297,1686250583326}]
2023-06-08 18:56:25,813 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnPipelineRestart,,1686250585037.1bb2430bf3171d76616c7e1910abdc0f.
2023-06-08 18:56:25,813 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1bb2430bf3171d76616c7e1910abdc0f, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1686250585037.1bb2430bf3171d76616c7e1910abdc0f.', STARTKEY => '', ENDKEY => ''}
2023-06-08 18:56:25,813 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnPipelineRestart 1bb2430bf3171d76616c7e1910abdc0f
2023-06-08 18:56:25,813 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1686250585037.1bb2430bf3171d76616c7e1910abdc0f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-08 18:56:25,814 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1bb2430bf3171d76616c7e1910abdc0f
2023-06-08 18:56:25,814 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1bb2430bf3171d76616c7e1910abdc0f
2023-06-08 18:56:25,815 INFO [StoreOpener-1bb2430bf3171d76616c7e1910abdc0f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1bb2430bf3171d76616c7e1910abdc0f
2023-06-08 18:56:25,817 DEBUG [StoreOpener-1bb2430bf3171d76616c7e1910abdc0f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/default/TestLogRolling-testLogRollOnPipelineRestart/1bb2430bf3171d76616c7e1910abdc0f/info
2023-06-08 18:56:25,817 DEBUG [StoreOpener-1bb2430bf3171d76616c7e1910abdc0f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/default/TestLogRolling-testLogRollOnPipelineRestart/1bb2430bf3171d76616c7e1910abdc0f/info
2023-06-08 18:56:25,817 INFO [StoreOpener-1bb2430bf3171d76616c7e1910abdc0f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1bb2430bf3171d76616c7e1910abdc0f columnFamilyName info
2023-06-08 18:56:25,818 INFO [StoreOpener-1bb2430bf3171d76616c7e1910abdc0f-1] regionserver.HStore(310): Store=1bb2430bf3171d76616c7e1910abdc0f/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-08 18:56:25,819 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/default/TestLogRolling-testLogRollOnPipelineRestart/1bb2430bf3171d76616c7e1910abdc0f
2023-06-08 18:56:25,819 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/default/TestLogRolling-testLogRollOnPipelineRestart/1bb2430bf3171d76616c7e1910abdc0f
2023-06-08 18:56:25,822 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1bb2430bf3171d76616c7e1910abdc0f
2023-06-08 18:56:25,824 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/default/TestLogRolling-testLogRollOnPipelineRestart/1bb2430bf3171d76616c7e1910abdc0f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-06-08 18:56:25,824 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1bb2430bf3171d76616c7e1910abdc0f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=774351, jitterRate=-0.015362277626991272}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-06-08 18:56:25,824 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1bb2430bf3171d76616c7e1910abdc0f:
2023-06-08 18:56:25,825 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnPipelineRestart,,1686250585037.1bb2430bf3171d76616c7e1910abdc0f., pid=11, masterSystemTime=1686250585809
2023-06-08 18:56:25,827 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnPipelineRestart,,1686250585037.1bb2430bf3171d76616c7e1910abdc0f.
2023-06-08 18:56:25,827 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnPipelineRestart,,1686250585037.1bb2430bf3171d76616c7e1910abdc0f.
2023-06-08 18:56:25,828 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=1bb2430bf3171d76616c7e1910abdc0f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,34297,1686250583326
2023-06-08 18:56:25,828 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1686250585037.1bb2430bf3171d76616c7e1910abdc0f.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1686250585828"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686250585828"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686250585828"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686250585828"}]},"ts":"1686250585828"}
2023-06-08 18:56:25,833 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10
2023-06-08 18:56:25,834 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 1bb2430bf3171d76616c7e1910abdc0f, server=jenkins-hbase17.apache.org,34297,1686250583326 in 182 msec
2023-06-08 18:56:25,836 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9
2023-06-08 18:56:25,836 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=1bb2430bf3171d76616c7e1910abdc0f, ASSIGN in 344 msec
2023-06-08 18:56:25,837 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_UPDATE_DESC_CACHE
2023-06-08 18:56:25,837 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686250585837"}]},"ts":"1686250585837"}
2023-06-08 18:56:25,839 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLED in hbase:meta
2023-06-08 18:56:25,841 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_POST_OPERATION
2023-06-08 18:56:25,843 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart in 803 msec
2023-06-08 18:56:27,000 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties
2023-06-08 18:56:29,941 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnPipelineRestart'
2023-06-08 18:56:35,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46567] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9
2023-06-08 18:56:35,044 INFO [Listener at localhost.localdomain/35315] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnPipelineRestart, procId: 9 completed
2023-06-08 18:56:35,047 DEBUG [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRollOnPipelineRestart
2023-06-08 18:56:35,047 DEBUG [Listener at localhost.localdomain/35315] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnPipelineRestart,,1686250585037.1bb2430bf3171d76616c7e1910abdc0f.
2023-06-08 18:56:37,054 INFO [Listener at localhost.localdomain/35315] wal.TestLogRolling(469): log.getCurrentFileName()): hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250584128
2023-06-08 18:56:37,055 WARN [Listener at localhost.localdomain/35315] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-06-08 18:56:37,056 WARN [ResponseProcessor for block BP-1524306116-136.243.18.41-1686250582543:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1524306116-136.243.18.41-1686250582543:blk_1073741833_1009
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-06-08 18:56:37,056 WARN [ResponseProcessor for block BP-1524306116-136.243.18.41-1686250582543:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1524306116-136.243.18.41-1686250582543:blk_1073741829_1005
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-06-08 18:56:37,056 WARN [ResponseProcessor for block BP-1524306116-136.243.18.41-1686250582543:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1524306116-136.243.18.41-1686250582543:blk_1073741832_1008
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-06-08 18:56:37,057 WARN [DataStreamer for file /user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.meta.1686250584320.meta block BP-1524306116-136.243.18.41-1686250582543:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-1524306116-136.243.18.41-1686250582543:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:46353,DS-8c66ac3d-a5ac-4ae1-b398-8dfb907d77a7,DISK], DatanodeInfoWithStorage[127.0.0.1:44269,DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:46353,DS-8c66ac3d-a5ac-4ae1-b398-8dfb907d77a7,DISK]) is bad.
2023-06-08 18:56:37,057 WARN [DataStreamer for file /user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250584128 block BP-1524306116-136.243.18.41-1686250582543:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-1524306116-136.243.18.41-1686250582543:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:46353,DS-8c66ac3d-a5ac-4ae1-b398-8dfb907d77a7,DISK], DatanodeInfoWithStorage[127.0.0.1:44269,DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:46353,DS-8c66ac3d-a5ac-4ae1-b398-8dfb907d77a7,DISK]) is bad.
2023-06-08 18:56:37,058 WARN [DataStreamer for file /user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/MasterData/WALs/jenkins-hbase17.apache.org,46567,1686250583219/jenkins-hbase17.apache.org%2C46567%2C1686250583219.1686250583512 block BP-1524306116-136.243.18.41-1686250582543:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-1524306116-136.243.18.41-1686250582543:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:46353,DS-8c66ac3d-a5ac-4ae1-b398-8dfb907d77a7,DISK], DatanodeInfoWithStorage[127.0.0.1:44269,DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:46353,DS-8c66ac3d-a5ac-4ae1-b398-8dfb907d77a7,DISK]) is bad.
2023-06-08 18:56:37,071 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-378052627_17 at /127.0.0.1:37968 [Receiving block BP-1524306116-136.243.18.41-1686250582543:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:44269:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:37968 dst: /127.0.0.1:44269
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:44269 remote=/127.0.0.1:37968]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-08 18:56:37,071 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866913990_17 at /127.0.0.1:38008 [Receiving block BP-1524306116-136.243.18.41-1686250582543:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:44269:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:38008 dst: /127.0.0.1:44269
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:44269 remote=/127.0.0.1:38008]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-08 18:56:37,071 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866913990_17 at /127.0.0.1:38000 [Receiving block BP-1524306116-136.243.18.41-1686250582543:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:44269:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:38000 dst: /127.0.0.1:44269
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:44269 remote=/127.0.0.1:38000]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-06-08 18:56:37,072 WARN [PacketResponder: BP-1524306116-136.243.18.41-1686250582543:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:44269]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run():
java.io.IOException: The stream is closed
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478)
	at java.lang.Thread.run(Thread.java:750)
2023-06-08 18:56:37,072 WARN [PacketResponder: BP-1524306116-136.243.18.41-1686250582543:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:44269]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run():
java.io.IOException: The stream is closed
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478)
	at java.lang.Thread.run(Thread.java:750)
2023-06-08 18:56:37,073 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866913990_17 at /127.0.0.1:49318 [Receiving block BP-1524306116-136.243.18.41-1686250582543:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:46353:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49318 dst: /127.0.0.1:46353
java.io.InterruptedIOException: Interrupted
while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 18:56:37,072 INFO [Listener at localhost.localdomain/35315] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 18:56:37,072 WARN [PacketResponder: BP-1524306116-136.243.18.41-1686250582543:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:44269]] datanode.BlockReceiver$PacketResponder(1486): 
IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-08 18:56:37,078 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866913990_17 at /127.0.0.1:49308 [Receiving block BP-1524306116-136.243.18.41-1686250582543:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:46353:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49308 dst: /127.0.0.1:46353 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 18:56:37,078 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-378052627_17 at /127.0.0.1:49282 [Receiving block BP-1524306116-136.243.18.41-1686250582543:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:46353:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49282 dst: /127.0.0.1:46353 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 18:56:37,082 WARN [BP-1524306116-136.243.18.41-1686250582543 heartbeating to localhost.localdomain/127.0.0.1:33233] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-08 18:56:37,082 WARN [BP-1524306116-136.243.18.41-1686250582543 heartbeating to localhost.localdomain/127.0.0.1:33233] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1524306116-136.243.18.41-1686250582543 (Datanode Uuid 
dfebf782-3cb5-4925-b839-592680324388) service to localhost.localdomain/127.0.0.1:33233 2023-06-08 18:56:37,083 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/cluster_dfac5ae9-826e-4745-790b-900cf043cb22/dfs/data/data3/current/BP-1524306116-136.243.18.41-1686250582543] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 18:56:37,083 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/cluster_dfac5ae9-826e-4745-790b-900cf043cb22/dfs/data/data4/current/BP-1524306116-136.243.18.41-1686250582543] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 18:56:37,089 WARN [Listener at localhost.localdomain/35315] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 18:56:37,091 WARN [Listener at localhost.localdomain/35315] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 18:56:37,092 INFO [Listener at localhost.localdomain/35315] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 18:56:37,097 INFO [Listener at localhost.localdomain/35315] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/java.io.tmpdir/Jetty_localhost_41197_datanode____.8k0u5j/webapp 2023-06-08 18:56:37,175 INFO [Listener at localhost.localdomain/35315] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41197 2023-06-08 18:56:37,184 WARN [Listener at 
localhost.localdomain/45007] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 18:56:37,194 WARN [Listener at localhost.localdomain/45007] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 18:56:37,194 WARN [ResponseProcessor for block BP-1524306116-136.243.18.41-1686250582543:blk_1073741833_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1524306116-136.243.18.41-1686250582543:blk_1073741833_1015 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-08 18:56:37,195 WARN [ResponseProcessor for block BP-1524306116-136.243.18.41-1686250582543:blk_1073741832_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1524306116-136.243.18.41-1686250582543:blk_1073741832_1016 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-08 18:56:37,194 WARN [ResponseProcessor for block BP-1524306116-136.243.18.41-1686250582543:blk_1073741829_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1524306116-136.243.18.41-1686250582543:blk_1073741829_1014 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at 
org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-08 18:56:37,199 INFO [Listener at localhost.localdomain/45007] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 18:56:37,252 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x814474130d22b12d: Processing first storage report for DS-8c66ac3d-a5ac-4ae1-b398-8dfb907d77a7 from datanode dfebf782-3cb5-4925-b839-592680324388 2023-06-08 18:56:37,253 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x814474130d22b12d: from storage DS-8c66ac3d-a5ac-4ae1-b398-8dfb907d77a7 node DatanodeRegistration(127.0.0.1:33455, datanodeUuid=dfebf782-3cb5-4925-b839-592680324388, infoPort=37285, infoSecurePort=0, ipcPort=45007, storageInfo=lv=-57;cid=testClusterID;nsid=1350837996;c=1686250582543), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 18:56:37,253 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x814474130d22b12d: Processing first storage report for DS-f752b46a-7b86-4035-b674-0f6cdd0b0cf2 from datanode dfebf782-3cb5-4925-b839-592680324388 2023-06-08 18:56:37,253 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x814474130d22b12d: from storage DS-f752b46a-7b86-4035-b674-0f6cdd0b0cf2 node DatanodeRegistration(127.0.0.1:33455, datanodeUuid=dfebf782-3cb5-4925-b839-592680324388, infoPort=37285, infoSecurePort=0, ipcPort=45007, storageInfo=lv=-57;cid=testClusterID;nsid=1350837996;c=1686250582543), blocks: 7, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-08 18:56:37,302 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-378052627_17 at /127.0.0.1:53416 [Receiving block BP-1524306116-136.243.18.41-1686250582543:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:44269:DataXceiver error processing WRITE_BLOCK 
operation src: /127.0.0.1:53416 dst: /127.0.0.1:44269 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 18:56:37,305 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866913990_17 at /127.0.0.1:53418 [Receiving block BP-1524306116-136.243.18.41-1686250582543:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:44269:DataXceiver error processing WRITE_BLOCK operation 
src: /127.0.0.1:53418 dst: /127.0.0.1:44269 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 18:56:37,305 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866913990_17 at /127.0.0.1:53438 [Receiving block BP-1524306116-136.243.18.41-1686250582543:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:44269:DataXceiver error processing WRITE_BLOCK operation src: 
/127.0.0.1:53438 dst: /127.0.0.1:44269 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 18:56:37,308 WARN [BP-1524306116-136.243.18.41-1686250582543 heartbeating to localhost.localdomain/127.0.0.1:33233] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-08 18:56:37,308 WARN [BP-1524306116-136.243.18.41-1686250582543 heartbeating to 
localhost.localdomain/127.0.0.1:33233] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1524306116-136.243.18.41-1686250582543 (Datanode Uuid e692bda4-3fa0-48fb-a0cc-08b6e7f0f404) service to localhost.localdomain/127.0.0.1:33233 2023-06-08 18:56:37,309 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/cluster_dfac5ae9-826e-4745-790b-900cf043cb22/dfs/data/data1/current/BP-1524306116-136.243.18.41-1686250582543] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 18:56:37,309 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/cluster_dfac5ae9-826e-4745-790b-900cf043cb22/dfs/data/data2/current/BP-1524306116-136.243.18.41-1686250582543] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 18:56:37,319 WARN [Listener at localhost.localdomain/45007] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 18:56:37,323 WARN [Listener at localhost.localdomain/45007] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 18:56:37,324 INFO [Listener at localhost.localdomain/45007] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 18:56:37,331 INFO [Listener at localhost.localdomain/45007] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/java.io.tmpdir/Jetty_localhost_45929_datanode____.754p6i/webapp 2023-06-08 18:56:37,415 INFO [Listener 
at localhost.localdomain/45007] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45929 2023-06-08 18:56:37,426 WARN [Listener at localhost.localdomain/38417] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 18:56:37,478 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5d9de2e680f3e779: Processing first storage report for DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a from datanode e692bda4-3fa0-48fb-a0cc-08b6e7f0f404 2023-06-08 18:56:37,479 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5d9de2e680f3e779: from storage DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a node DatanodeRegistration(127.0.0.1:43305, datanodeUuid=e692bda4-3fa0-48fb-a0cc-08b6e7f0f404, infoPort=38845, infoSecurePort=0, ipcPort=38417, storageInfo=lv=-57;cid=testClusterID;nsid=1350837996;c=1686250582543), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 18:56:37,479 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5d9de2e680f3e779: Processing first storage report for DS-6caf5336-8392-4292-b82e-e32680f388b2 from datanode e692bda4-3fa0-48fb-a0cc-08b6e7f0f404 2023-06-08 18:56:37,479 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5d9de2e680f3e779: from storage DS-6caf5336-8392-4292-b82e-e32680f388b2 node DatanodeRegistration(127.0.0.1:43305, datanodeUuid=e692bda4-3fa0-48fb-a0cc-08b6e7f0f404, infoPort=38845, infoSecurePort=0, ipcPort=38417, storageInfo=lv=-57;cid=testClusterID;nsid=1350837996;c=1686250582543), blocks: 6, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-08 18:56:38,430 INFO [Listener at localhost.localdomain/38417] wal.TestLogRolling(481): Data Nodes restarted 2023-06-08 18:56:38,431 INFO [Listener at localhost.localdomain/38417] wal.AbstractTestLogRolling(233): Validated 
row row1002 2023-06-08 18:56:38,432 WARN [RS:0;jenkins-hbase17:34297.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=5, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44269,DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:56:38,433 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase17.apache.org%2C34297%2C1686250583326:(num 1686250584128) roll requested 2023-06-08 18:56:38,433 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34297] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44269,DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:56:38,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34297] ipc.CallRunner(144): callId: 11 service: ClientService methodName: Mutate size: 1.2 K connection: 136.243.18.41:38102 deadline: 1686250608432, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL 2023-06-08 18:56:38,444 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250584128 newFile=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250598433 2023-06-08 18:56:38,445 WARN [regionserver/jenkins-hbase17:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL 2023-06-08 18:56:38,445 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250584128 with entries=5, filesize=2.11 KB; new WAL /user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250598433 
2023-06-08 18:56:38,445 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33455,DS-8c66ac3d-a5ac-4ae1-b398-8dfb907d77a7,DISK], DatanodeInfoWithStorage[127.0.0.1:43305,DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a,DISK]] 2023-06-08 18:56:38,445 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44269,DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:56:38,445 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250584128 is not closed yet, will try archiving it next time 2023-06-08 18:56:38,445 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250584128; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44269,DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:56:50,477 INFO [Listener at localhost.localdomain/38417] wal.AbstractTestLogRolling(233): Validated row row1003 2023-06-08 18:56:52,481 WARN [Listener at localhost.localdomain/38417] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 18:56:52,484 WARN [ResponseProcessor for block BP-1524306116-136.243.18.41-1686250582543:blk_1073741838_1017] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1524306116-136.243.18.41-1686250582543:blk_1073741838_1017 java.io.IOException: Bad response ERROR for BP-1524306116-136.243.18.41-1686250582543:blk_1073741838_1017 from datanode DatanodeInfoWithStorage[127.0.0.1:43305,DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-06-08 18:56:52,485 WARN [DataStreamer for file /user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250598433 block BP-1524306116-136.243.18.41-1686250582543:blk_1073741838_1017] hdfs.DataStreamer(1548): Error Recovery for BP-1524306116-136.243.18.41-1686250582543:blk_1073741838_1017 in pipeline [DatanodeInfoWithStorage[127.0.0.1:33455,DS-8c66ac3d-a5ac-4ae1-b398-8dfb907d77a7,DISK], DatanodeInfoWithStorage[127.0.0.1:43305,DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:43305,DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a,DISK]) is bad. 
2023-06-08 18:56:52,485 WARN [PacketResponder: BP-1524306116-136.243.18.41-1686250582543:blk_1073741838_1017, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:43305]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.nio.channels.ClosedByInterruptException at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-08 18:56:52,486 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866913990_17 at /127.0.0.1:32790 [Receiving block BP-1524306116-136.243.18.41-1686250582543:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:33455:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:32790 dst: /127.0.0.1:33455 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 18:56:52,491 INFO [Listener at localhost.localdomain/38417] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 18:56:52,601 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866913990_17 at /127.0.0.1:48488 [Receiving block BP-1524306116-136.243.18.41-1686250582543:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:43305:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:48488 dst: /127.0.0.1:43305 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 18:56:52,605 WARN [BP-1524306116-136.243.18.41-1686250582543 heartbeating to localhost.localdomain/127.0.0.1:33233] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-08 18:56:52,605 WARN [BP-1524306116-136.243.18.41-1686250582543 heartbeating to localhost.localdomain/127.0.0.1:33233] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1524306116-136.243.18.41-1686250582543 (Datanode Uuid 
e692bda4-3fa0-48fb-a0cc-08b6e7f0f404) service to localhost.localdomain/127.0.0.1:33233 2023-06-08 18:56:52,606 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/cluster_dfac5ae9-826e-4745-790b-900cf043cb22/dfs/data/data1/current/BP-1524306116-136.243.18.41-1686250582543] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 18:56:52,606 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/cluster_dfac5ae9-826e-4745-790b-900cf043cb22/dfs/data/data2/current/BP-1524306116-136.243.18.41-1686250582543] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 18:56:52,612 WARN [Listener at localhost.localdomain/38417] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 18:56:52,615 WARN [Listener at localhost.localdomain/38417] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 18:56:52,616 INFO [Listener at localhost.localdomain/38417] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 18:56:52,622 INFO [Listener at localhost.localdomain/38417] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/java.io.tmpdir/Jetty_localhost_41365_datanode____52c5q4/webapp 2023-06-08 18:56:52,694 INFO [Listener at localhost.localdomain/38417] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41365 2023-06-08 18:56:52,705 WARN [Listener at 
localhost.localdomain/35559] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 18:56:52,708 WARN [Listener at localhost.localdomain/35559] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 18:56:52,708 WARN [ResponseProcessor for block BP-1524306116-136.243.18.41-1686250582543:blk_1073741838_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1524306116-136.243.18.41-1686250582543:blk_1073741838_1018 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-08 18:56:52,712 INFO [Listener at localhost.localdomain/35559] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 18:56:52,756 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9979a5f99cc7bd02: Processing first storage report for DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a from datanode e692bda4-3fa0-48fb-a0cc-08b6e7f0f404 2023-06-08 18:56:52,756 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9979a5f99cc7bd02: from storage DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a node DatanodeRegistration(127.0.0.1:34821, datanodeUuid=e692bda4-3fa0-48fb-a0cc-08b6e7f0f404, infoPort=39661, infoSecurePort=0, ipcPort=35559, storageInfo=lv=-57;cid=testClusterID;nsid=1350837996;c=1686250582543), blocks: 8, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-08 18:56:52,756 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9979a5f99cc7bd02: Processing first storage report for DS-6caf5336-8392-4292-b82e-e32680f388b2 from datanode 
e692bda4-3fa0-48fb-a0cc-08b6e7f0f404 2023-06-08 18:56:52,756 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9979a5f99cc7bd02: from storage DS-6caf5336-8392-4292-b82e-e32680f388b2 node DatanodeRegistration(127.0.0.1:34821, datanodeUuid=e692bda4-3fa0-48fb-a0cc-08b6e7f0f404, infoPort=39661, infoSecurePort=0, ipcPort=35559, storageInfo=lv=-57;cid=testClusterID;nsid=1350837996;c=1686250582543), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 18:56:52,820 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1866913990_17 at /127.0.0.1:51160 [Receiving block BP-1524306116-136.243.18.41-1686250582543:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:33455:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:51160 dst: /127.0.0.1:33455 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 18:56:52,866 WARN [BP-1524306116-136.243.18.41-1686250582543 heartbeating to localhost.localdomain/127.0.0.1:33233] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-08 18:56:52,866 WARN [BP-1524306116-136.243.18.41-1686250582543 heartbeating to localhost.localdomain/127.0.0.1:33233] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1524306116-136.243.18.41-1686250582543 (Datanode Uuid dfebf782-3cb5-4925-b839-592680324388) service to localhost.localdomain/127.0.0.1:33233 2023-06-08 18:56:52,867 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/cluster_dfac5ae9-826e-4745-790b-900cf043cb22/dfs/data/data3/current/BP-1524306116-136.243.18.41-1686250582543] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 18:56:52,867 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/cluster_dfac5ae9-826e-4745-790b-900cf043cb22/dfs/data/data4/current/BP-1524306116-136.243.18.41-1686250582543] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 18:56:52,874 WARN [Listener at 
localhost.localdomain/35559] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 18:56:52,877 WARN [Listener at localhost.localdomain/35559] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 18:56:52,879 INFO [Listener at localhost.localdomain/35559] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 18:56:52,885 INFO [Listener at localhost.localdomain/35559] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/java.io.tmpdir/Jetty_localhost_45221_datanode____.sbwi1/webapp 2023-06-08 18:56:52,957 INFO [Listener at localhost.localdomain/35559] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45221 2023-06-08 18:56:52,965 WARN [Listener at localhost.localdomain/37799] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 18:56:53,011 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd7fb9fdeee9e6375: Processing first storage report for DS-8c66ac3d-a5ac-4ae1-b398-8dfb907d77a7 from datanode dfebf782-3cb5-4925-b839-592680324388 2023-06-08 18:56:53,011 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd7fb9fdeee9e6375: from storage DS-8c66ac3d-a5ac-4ae1-b398-8dfb907d77a7 node DatanodeRegistration(127.0.0.1:41341, datanodeUuid=dfebf782-3cb5-4925-b839-592680324388, infoPort=38757, infoSecurePort=0, ipcPort=37799, storageInfo=lv=-57;cid=testClusterID;nsid=1350837996;c=1686250582543), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 18:56:53,011 INFO [Block report processor] 
blockmanagement.BlockManager(2202): BLOCK* processReport 0xd7fb9fdeee9e6375: Processing first storage report for DS-f752b46a-7b86-4035-b674-0f6cdd0b0cf2 from datanode dfebf782-3cb5-4925-b839-592680324388 2023-06-08 18:56:53,012 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd7fb9fdeee9e6375: from storage DS-f752b46a-7b86-4035-b674-0f6cdd0b0cf2 node DatanodeRegistration(127.0.0.1:41341, datanodeUuid=dfebf782-3cb5-4925-b839-592680324388, infoPort=38757, infoSecurePort=0, ipcPort=37799, storageInfo=lv=-57;cid=testClusterID;nsid=1350837996;c=1686250582543), blocks: 8, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 18:56:53,763 WARN [master/jenkins-hbase17:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44269,DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:56:53,767 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase17.apache.org%2C46567%2C1686250583219:(num 1686250583512) roll requested 2023-06-08 18:56:53,767 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44269,DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:56:53,769 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423) at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes 
[DatanodeInfoWithStorage[127.0.0.1:44269,DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:56:53,776 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL 2023-06-08 18:56:53,777 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/MasterData/WALs/jenkins-hbase17.apache.org,46567,1686250583219/jenkins-hbase17.apache.org%2C46567%2C1686250583219.1686250583512 with entries=88, filesize=43.81 KB; new WAL /user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/MasterData/WALs/jenkins-hbase17.apache.org,46567,1686250583219/jenkins-hbase17.apache.org%2C46567%2C1686250583219.1686250613768 2023-06-08 18:56:53,777 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34821,DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a,DISK], DatanodeInfoWithStorage[127.0.0.1:41341,DS-8c66ac3d-a5ac-4ae1-b398-8dfb907d77a7,DISK]] 2023-06-08 18:56:53,777 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/MasterData/WALs/jenkins-hbase17.apache.org,46567,1686250583219/jenkins-hbase17.apache.org%2C46567%2C1686250583219.1686250583512 is not closed yet, will try archiving it next time 2023-06-08 18:56:53,777 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... 
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44269,DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:56:53,777 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/MasterData/WALs/jenkins-hbase17.apache.org,46567,1686250583219/jenkins-hbase17.apache.org%2C46567%2C1686250583219.1686250583512; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44269,DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:56:53,969 INFO [Listener at localhost.localdomain/37799] wal.TestLogRolling(498): Data Nodes restarted 2023-06-08 18:56:53,971 INFO [Listener at localhost.localdomain/37799] wal.AbstractTestLogRolling(233): Validated row row1004 2023-06-08 18:56:53,971 WARN [RS:0;jenkins-hbase17:34297.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=8, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33455,DS-8c66ac3d-a5ac-4ae1-b398-8dfb907d77a7,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:56:53,972 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase17.apache.org%2C34297%2C1686250583326:(num 1686250598433) roll requested 2023-06-08 18:56:53,972 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34297] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33455,DS-8c66ac3d-a5ac-4ae1-b398-8dfb907d77a7,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:56:53,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34297] ipc.CallRunner(144): callId: 18 service: ClientService methodName: Mutate size: 1.2 K connection: 136.243.18.41:38102 deadline: 1686250623971, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL 2023-06-08 18:56:53,985 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250598433 newFile=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250613972 2023-06-08 18:56:53,985 WARN [regionserver/jenkins-hbase17:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL 2023-06-08 18:56:53,985 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250598433 with entries=2, filesize=2.37 KB; new WAL /user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250613972 
2023-06-08 18:56:53,985 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41341,DS-8c66ac3d-a5ac-4ae1-b398-8dfb907d77a7,DISK], DatanodeInfoWithStorage[127.0.0.1:34821,DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a,DISK]]
2023-06-08 18:56:53,985 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing...
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33455,DS-8c66ac3d-a5ac-4ae1-b398-8dfb907d77a7,DISK]] are bad. Aborting...
    at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-08 18:56:53,986 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250598433 is not closed yet, will try archiving it next time
2023-06-08 18:56:53,986 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250598433; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33455,DS-8c66ac3d-a5ac-4ae1-b398-8dfb907d77a7,DISK]] are bad. Aborting...
    at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-08 18:57:05,998 DEBUG [Listener at localhost.localdomain/37799] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250613972 newFile=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250625987
2023-06-08 18:57:06,000 INFO [Listener at localhost.localdomain/37799] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250613972 with entries=1, filesize=1.22 KB; new WAL /user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250625987
2023-06-08 18:57:06,004 DEBUG [Listener at localhost.localdomain/37799] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41341,DS-8c66ac3d-a5ac-4ae1-b398-8dfb907d77a7,DISK], DatanodeInfoWithStorage[127.0.0.1:34821,DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a,DISK]]
2023-06-08 18:57:06,004 DEBUG [Listener at localhost.localdomain/37799] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250613972 is not closed yet, will try archiving it next time
2023-06-08 18:57:06,004 DEBUG [Listener at localhost.localdomain/37799] wal.TestLogRolling(512): recovering lease for hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250584128
2023-06-08 18:57:06,005 INFO [Listener at localhost.localdomain/37799] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250584128
2023-06-08 18:57:06,008 WARN [IPC Server handler 2 on default port 33233] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250584128 has not been closed. Lease recovery is in progress.
RecoveryId = 1022 for block blk_1073741832_1016
2023-06-08 18:57:06,010 INFO [Listener at localhost.localdomain/37799] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250584128 after 5ms
2023-06-08 18:57:06,778 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@5141e3ef] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-1524306116-136.243.18.41-1686250582543:blk_1073741832_1016, datanode=DatanodeInfoWithStorage[127.0.0.1:41341,null,null])
java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741832_1016, replica=ReplicaWaitingToBeRecovered, blk_1073741832_1008, RWR
    getNumBytes() = 2162
    getBytesOnDisk() = 2162
    getVisibleLength()= -1
    getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/cluster_dfac5ae9-826e-4745-790b-900cf043cb22/dfs/data/data4/current
    getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/cluster_dfac5ae9-826e-4745-790b-900cf043cb22/dfs/data/data4/current/BP-1524306116-136.243.18.41-1686250582543/current/rbw/blk_1073741832
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835)
    at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55)
    at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110)
    at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:348)
    at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46)
    at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120)
    at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741832_1016, replica=ReplicaWaitingToBeRecovered, blk_1073741832_1008, RWR
    getNumBytes() = 2162
    getBytesOnDisk() = 2162
    getVisibleLength()= -1
    getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/cluster_dfac5ae9-826e-4745-790b-900cf043cb22/dfs/data/data4/current
    getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/cluster_dfac5ae9-826e-4745-790b-900cf043cb22/dfs/data/data4/current/BP-1524306116-136.243.18.41-1686250582543/current/rbw/blk_1073741832
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835)
    at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55)
    at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540)
    at org.apache.hadoop.ipc.Client.call(Client.java:1486)
    at org.apache.hadoop.ipc.Client.call(Client.java:1385)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
    at com.sun.proxy.$Proxy43.initReplicaRecovery(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolTranslatorPB.initReplicaRecovery(InterDatanodeProtocolTranslatorPB.java:83)
    at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346)
    ... 4 more
2023-06-08 18:57:10,011 INFO [Listener at localhost.localdomain/37799] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250584128 after 4006ms
2023-06-08 18:57:10,012 DEBUG [Listener at localhost.localdomain/37799] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250584128
2023-06-08 18:57:10,024 DEBUG [Listener at localhost.localdomain/37799] wal.TestLogRolling(522): #3: [\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1686250584872/Put/vlen=176/seqid=0]
2023-06-08 18:57:10,025 DEBUG [Listener at localhost.localdomain/37799] wal.TestLogRolling(522): #4: [default/info:d/1686250584941/Put/vlen=9/seqid=0]
2023-06-08 18:57:10,025 DEBUG [Listener at localhost.localdomain/37799] wal.TestLogRolling(522): #5: [hbase/info:d/1686250584981/Put/vlen=7/seqid=0]
2023-06-08 18:57:10,025 DEBUG [Listener at localhost.localdomain/37799] wal.TestLogRolling(522): #3: [\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1686250585825/Put/vlen=232/seqid=0]
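The lease-recovery sequence recorded above follows a simple retry shape: attempt=0 fails immediately ("Lease recovery is in progress"), the client pauses, and attempt=1 succeeds roughly four seconds later. A minimal sketch of that loop is below; `recoverLease` here is a stand-in for a call like `DistributedFileSystem.recoverLease`, and the fixed pause is an assumption matching the ~4 s intervals seen in this log, not the actual RecoverLeaseFSUtils backoff schedule.

```java
import java.util.function.Supplier;

/** Minimal sketch of a recoverLease retry loop (illustrative, not the HBase implementation). */
public class LeaseRecoverySketch {

    /**
     * Calls the supplied lease-recovery probe until it reports success.
     * Returns the attempt number that succeeded ("Recovered lease, attempt=N"),
     * or -1 if all attempts failed.
     */
    static int recoverLeaseWithRetries(Supplier<Boolean> recoverLease,
                                       int maxAttempts, long pauseMs) throws InterruptedException {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            if (recoverLease.get()) {
                return attempt;
            }
            // "Failed to recover lease, attempt=N" -> wait before asking the NameNode again.
            Thread.sleep(pauseMs);
        }
        return -1;
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate the behaviour in the log: the first probe fails, the second succeeds.
        int[] calls = {0};
        int attempt = recoverLeaseWithRetries(() -> ++calls[0] > 1, 3, 10L);
        System.out.println("recovered on attempt=" + attempt);
    }
}
```

The pause matters because the NameNode has already started internal lease recovery (`internalReleaseLease`); the client only needs to re-check once block recovery has converged.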
2023-06-08 18:57:10,025 DEBUG [Listener at localhost.localdomain/37799] wal.TestLogRolling(522): #4: [row1002/info:/1686250595052/Put/vlen=1045/seqid=0]
2023-06-08 18:57:10,026 DEBUG [Listener at localhost.localdomain/37799] wal.ProtobufLogReader(420): EOF at position 2162
2023-06-08 18:57:10,026 DEBUG [Listener at localhost.localdomain/37799] wal.TestLogRolling(512): recovering lease for hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250598433
2023-06-08 18:57:10,026 INFO [Listener at localhost.localdomain/37799] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250598433
2023-06-08 18:57:10,027 WARN [IPC Server handler 3 on default port 33233] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250598433 has not been closed. Lease recovery is in progress.
RecoveryId = 1023 for block blk_1073741838_1018
2023-06-08 18:57:10,028 INFO [Listener at localhost.localdomain/37799] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250598433 after 2ms
2023-06-08 18:57:11,017 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@20e42609] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-1524306116-136.243.18.41-1686250582543:blk_1073741838_1018, datanode=DatanodeInfoWithStorage[127.0.0.1:34821,null,null])
java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR
    getNumBytes() = 2425
    getBytesOnDisk() = 2425
    getVisibleLength()= -1
    getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/cluster_dfac5ae9-826e-4745-790b-900cf043cb22/dfs/data/data1/current
    getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/cluster_dfac5ae9-826e-4745-790b-900cf043cb22/dfs/data/data1/current/BP-1524306116-136.243.18.41-1686250582543/current/rbw/blk_1073741838
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835)
    at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55)
    at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110)
    at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:348)
    at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46)
    at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120)
    at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR
    getNumBytes() = 2425
    getBytesOnDisk() = 2425
    getVisibleLength()= -1
    getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/cluster_dfac5ae9-826e-4745-790b-900cf043cb22/dfs/data/data1/current
    getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/cluster_dfac5ae9-826e-4745-790b-900cf043cb22/dfs/data/data1/current/BP-1524306116-136.243.18.41-1686250582543/current/rbw/blk_1073741838
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835)
    at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55)
    at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540)
    at org.apache.hadoop.ipc.Client.call(Client.java:1486)
    at org.apache.hadoop.ipc.Client.call(Client.java:1385)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
    at com.sun.proxy.$Proxy43.initReplicaRecovery(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolTranslatorPB.initReplicaRecovery(InterDatanodeProtocolTranslatorPB.java:83)
    at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346)
    ... 4 more
2023-06-08 18:57:14,029 INFO [Listener at localhost.localdomain/37799] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250598433 after 4003ms
2023-06-08 18:57:14,029 DEBUG [Listener at localhost.localdomain/37799] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250598433
2023-06-08 18:57:14,034 DEBUG [Listener at localhost.localdomain/37799] wal.TestLogRolling(522): #6: [row1003/info:/1686250608471/Put/vlen=1045/seqid=0]
2023-06-08 18:57:14,034 DEBUG [Listener at localhost.localdomain/37799] wal.TestLogRolling(522): #7: [row1004/info:/1686250610479/Put/vlen=1045/seqid=0]
2023-06-08 18:57:14,034 DEBUG [Listener at localhost.localdomain/37799] wal.ProtobufLogReader(420): EOF at position 2425
2023-06-08 18:57:14,034 DEBUG [Listener at localhost.localdomain/37799] wal.TestLogRolling(512): recovering lease for hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250613972
2023-06-08 18:57:14,035 INFO [Listener at localhost.localdomain/37799] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250613972
2023-06-08 18:57:14,035 INFO [Listener at localhost.localdomain/37799] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=0 on file=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250613972 after 0ms
2023-06-08 18:57:14,035 DEBUG [Listener at localhost.localdomain/37799] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250613972
2023-06-08 18:57:14,040 DEBUG [Listener at localhost.localdomain/37799] wal.TestLogRolling(522): #9: [row1005/info:/1686250623982/Put/vlen=1045/seqid=0]
2023-06-08 18:57:14,040 DEBUG [Listener at localhost.localdomain/37799] wal.TestLogRolling(512): recovering lease for hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250625987
2023-06-08 18:57:14,040 INFO [Listener at localhost.localdomain/37799] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250625987
2023-06-08 18:57:14,040 WARN [IPC Server handler 1 on default port 33233] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250625987 has not been closed. Lease recovery is in progress.
RecoveryId = 1024 for block blk_1073741841_1021
2023-06-08 18:57:14,041 INFO [Listener at localhost.localdomain/37799] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250625987 after 1ms
2023-06-08 18:57:15,011 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-378052627_17 at /127.0.0.1:50548 [Receiving block BP-1524306116-136.243.18.41-1686250582543:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:41341:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:50548 dst: /127.0.0.1:41341
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:41341 remote=/127.0.0.1:50548]. 60000 millis timeout left.
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    at java.io.DataInputStream.read(DataInputStream.java:149)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
    at java.lang.Thread.run(Thread.java:750)
2023-06-08 18:57:15,012 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-378052627_17 at /127.0.0.1:53618 [Receiving block BP-1524306116-136.243.18.41-1686250582543:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:34821:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:53618 dst: /127.0.0.1:34821
java.io.IOException: Premature EOF from inputStream
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
    at java.lang.Thread.run(Thread.java:750)
2023-06-08 18:57:15,012 WARN [ResponseProcessor for block BP-1524306116-136.243.18.41-1686250582543:blk_1073741841_1021] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1524306116-136.243.18.41-1686250582543:blk_1073741841_1021
java.io.EOFException: Unexpected EOF while trying to read response from server
    at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
    at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-06-08 18:57:15,013 WARN [DataStreamer for file /user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250625987 block BP-1524306116-136.243.18.41-1686250582543:blk_1073741841_1021] hdfs.DataStreamer(1548): Error Recovery for BP-1524306116-136.243.18.41-1686250582543:blk_1073741841_1021 in pipeline [DatanodeInfoWithStorage[127.0.0.1:41341,DS-8c66ac3d-a5ac-4ae1-b398-8dfb907d77a7,DISK], DatanodeInfoWithStorage[127.0.0.1:34821,DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:41341,DS-8c66ac3d-a5ac-4ae1-b398-8dfb907d77a7,DISK]) is bad.
2023-06-08 18:57:15,018 WARN [DataStreamer for file /user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250625987 block BP-1524306116-136.243.18.41-1686250582543:blk_1073741841_1021] hdfs.DataStreamer(823): DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1524306116-136.243.18.41-1686250582543:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540)
    at org.apache.hadoop.ipc.Client.call(Client.java:1486)
    at org.apache.hadoop.ipc.Client.call(Client.java:1385)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
    at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918)
    at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
    at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-08 18:57:18,042 INFO [Listener at localhost.localdomain/37799] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250625987 after 4002ms
2023-06-08 18:57:18,042 DEBUG [Listener at localhost.localdomain/37799] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250625987
2023-06-08 18:57:18,050 DEBUG [Listener at localhost.localdomain/37799] wal.ProtobufLogReader(420): EOF at position 83
2023-06-08 18:57:18,051 INFO [Listener at localhost.localdomain/37799] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.96 KB heapSize=5.48 KB
2023-06-08 18:57:18,051 WARN [RS_OPEN_META-regionserver/jenkins-hbase17:0-0.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=15, requesting roll of WAL
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44269,DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a,DISK]] are bad. Aborting...
    at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-08 18:57:18,052 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase17.apache.org%2C34297%2C1686250583326.meta:.meta(num 1686250584320) roll requested
2023-06-08 18:57:18,052 DEBUG [Listener at localhost.localdomain/37799] regionserver.HRegion(2446): Flush status journal for 1588230740:
2023-06-08 18:57:18,052 INFO [Listener at localhost.localdomain/37799] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979)
    at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168)
    at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125)
    at java.lang.Thread.run(Thread.java:750)
Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44269,DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a,DISK]] are bad. Aborting...
    at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-06-08 18:57:18,053 INFO [Listener at localhost.localdomain/37799] regionserver.HRegion(2745): Flushing 88de32a5a86759b62951f20f99d10abb 1/1 column families, dataSize=78 B heapSize=488 B
2023-06-08 18:57:18,055 WARN [RS:0;jenkins-hbase17:34297.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=7, requesting roll of WAL
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1524306116-136.243.18.41-1686250582543:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540)
    at org.apache.hadoop.ipc.Client.call(Client.java:1486)
    at org.apache.hadoop.ipc.Client.call(Client.java:1385)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
    at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918)
    at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
    at
com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:57:18,056 DEBUG [Listener at localhost.localdomain/37799] regionserver.HRegion(2446): Flush status journal for 88de32a5a86759b62951f20f99d10abb: 2023-06-08 18:57:18,056 INFO [Listener at localhost.localdomain/37799] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1524306116-136.243.18.41-1686250582543:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at 
sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:57:18,064 INFO [Listener at localhost.localdomain/37799] regionserver.HRegion(2745): Flushing 1bb2430bf3171d76616c7e1910abdc0f 1/1 column families, dataSize=4.20 KB heapSize=4.75 KB 2023-06-08 18:57:18,064 DEBUG [Listener at localhost.localdomain/37799] regionserver.HRegion(2446): Flush status journal for 1bb2430bf3171d76616c7e1910abdc0f: 2023-06-08 18:57:18,064 INFO [Listener at localhost.localdomain/37799] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1524306116-136.243.18.41-1686250582543:blk_1073741841_1021 is 
UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:57:18,070 INFO [Listener at localhost.localdomain/37799] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-08 18:57:18,070 INFO [Listener at localhost.localdomain/37799] client.ConnectionImplementation(1980): Closing master protocol: MasterService 2023-06-08 18:57:18,071 DEBUG [Listener at localhost.localdomain/37799] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x02099140 to 127.0.0.1:53036 2023-06-08 18:57:18,071 DEBUG [Listener at localhost.localdomain/37799] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 18:57:18,071 DEBUG [Listener at localhost.localdomain/37799] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-06-08 18:57:18,071 DEBUG [Listener at localhost.localdomain/37799] util.JVMClusterUtil(257): Found active master hash=853693737, stopped=false 2023-06-08 18:57:18,072 INFO [Listener at localhost.localdomain/37799] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase17.apache.org,46567,1686250583219 2023-06-08 18:57:18,074 DEBUG 
[Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-08 18:57:18,074 INFO [Listener at localhost.localdomain/37799] procedure2.ProcedureExecutor(629): Stopping 2023-06-08 18:57:18,074 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): regionserver:34297-0x100abcb6f930001, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-08 18:57:18,075 DEBUG [Listener at localhost.localdomain/37799] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x04cca221 to 127.0.0.1:53036 2023-06-08 18:57:18,074 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:57:18,075 DEBUG [Listener at localhost.localdomain/37799] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 18:57:18,076 INFO [Listener at localhost.localdomain/37799] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase17.apache.org,34297,1686250583326' ***** 2023-06-08 18:57:18,076 WARN [regionserver/jenkins-hbase17:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL 2023-06-08 18:57:18,076 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.meta.1686250584320.meta with entries=11, filesize=3.72 KB; new WAL 
/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.meta.1686250638052.meta 2023-06-08 18:57:18,076 INFO [Listener at localhost.localdomain/37799] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-08 18:57:18,076 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41341,DS-8c66ac3d-a5ac-4ae1-b398-8dfb907d77a7,DISK], DatanodeInfoWithStorage[127.0.0.1:34821,DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a,DISK]] 2023-06-08 18:57:18,076 INFO [RS:0;jenkins-hbase17:34297] regionserver.HeapMemoryManager(220): Stopping 2023-06-08 18:57:18,076 INFO [RS:0;jenkins-hbase17:34297] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-08 18:57:18,077 INFO [RS:0;jenkins-hbase17:34297] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-06-08 18:57:18,077 INFO [RS:0;jenkins-hbase17:34297] regionserver.HRegionServer(3303): Received CLOSE for 88de32a5a86759b62951f20f99d10abb 2023-06-08 18:57:18,076 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.meta.1686250584320.meta is not closed yet, will try archiving it next time 2023-06-08 18:57:18,077 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34297-0x100abcb6f930001, quorum=127.0.0.1:53036, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 18:57:18,077 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... 
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44269,DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:57:18,076 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-08 18:57:18,077 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase17.apache.org%2C34297%2C1686250583326:(num 1686250625987) roll requested 2023-06-08 18:57:18,080 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 18:57:18,080 INFO [RS:0;jenkins-hbase17:34297] regionserver.HRegionServer(3303): Received CLOSE for 1bb2430bf3171d76616c7e1910abdc0f 2023-06-08 18:57:18,080 INFO [RS:0;jenkins-hbase17:34297] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,34297,1686250583326 2023-06-08 18:57:18,080 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 88de32a5a86759b62951f20f99d10abb, disabling compactions & flushes 2023-06-08 18:57:18,081 DEBUG [RS:0;jenkins-hbase17:34297] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x599a4762 to 127.0.0.1:53036 2023-06-08 18:57:18,081 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686250584415.88de32a5a86759b62951f20f99d10abb. 
2023-06-08 18:57:18,081 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686250584415.88de32a5a86759b62951f20f99d10abb. 2023-06-08 18:57:18,081 DEBUG [RS:0;jenkins-hbase17:34297] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 18:57:18,081 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686250584415.88de32a5a86759b62951f20f99d10abb. after waiting 0 ms 2023-06-08 18:57:18,081 INFO [RS:0;jenkins-hbase17:34297] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-08 18:57:18,081 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686250584415.88de32a5a86759b62951f20f99d10abb. 2023-06-08 18:57:18,081 INFO [RS:0;jenkins-hbase17:34297] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-08 18:57:18,081 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 88de32a5a86759b62951f20f99d10abb 1/1 column families, dataSize=78 B heapSize=728 B 2023-06-08 18:57:18,081 INFO [RS:0;jenkins-hbase17:34297] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-06-08 18:57:18,081 INFO [RS:0;jenkins-hbase17:34297] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-08 18:57:18,081 WARN [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultMemStore(90): Snapshot called again without clearing previous. Doing nothing. Another ongoing flush or did we fail last attempt? 
2023-06-08 18:57:18,083 INFO [RS:0;jenkins-hbase17:34297] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-06-08 18:57:18,083 DEBUG [RS:0;jenkins-hbase17:34297] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 88de32a5a86759b62951f20f99d10abb=hbase:namespace,,1686250584415.88de32a5a86759b62951f20f99d10abb., 1bb2430bf3171d76616c7e1910abdc0f=TestLogRolling-testLogRollOnPipelineRestart,,1686250585037.1bb2430bf3171d76616c7e1910abdc0f.} 2023-06-08 18:57:18,083 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-08 18:57:18,084 DEBUG [RS:0;jenkins-hbase17:34297] regionserver.HRegionServer(1504): Waiting on 1588230740, 1bb2430bf3171d76616c7e1910abdc0f, 88de32a5a86759b62951f20f99d10abb 2023-06-08 18:57:18,084 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-08 18:57:18,084 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-08 18:57:18,084 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-08 18:57:18,084 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-08 18:57:18,084 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.96 KB heapSize=5.95 KB 2023-06-08 18:57:18,085 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.meta.1686250584320.meta; 
THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44269,DS-3dd8e063-d1b4-4b9a-81fc-1b6e53ab5e7a,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:57:18,085 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 88de32a5a86759b62951f20f99d10abb: 2023-06-08 18:57:18,086 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase17.apache.org,34297,1686250583326: Unrecoverable exception while closing hbase:namespace,,1686250584415.88de32a5a86759b62951f20f99d10abb. ***** org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1524306116-136.243.18.41-1686250582543:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:57:18,087 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2023-06-08 18:57:18,087 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] util.JSONBean(130): Listing beans for java.lang:type=Memory 2023-06-08 18:57:18,085 WARN [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultMemStore(90): Snapshot called again without clearing previous. Doing nothing. Another ongoing flush or did we fail last attempt? 2023-06-08 18:57:18,087 WARN [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultMemStore(90): Snapshot called again without clearing previous. Doing nothing. Another ongoing flush or did we fail last attempt? 
2023-06-08 18:57:18,088 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC 2023-06-08 18:57:18,088 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication 2023-06-08 18:57:18,088 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server 2023-06-08 18:57:18,089 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 1098383360, "init": 524288000, "max": 2051014656, "used": 349756048 }, "NonHeapMemoryUsage": { "committed": 139526144, "init": 2555904, "max": -1, "used": 136996904 }, "Verbose": false, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] } 2023-06-08 18:57:18,089 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46567] master.MasterRpcServices(609): jenkins-hbase17.apache.org,34297,1686250583326 reported a fatal error: ***** ABORTING region server jenkins-hbase17.apache.org,34297,1686250583326: Unrecoverable exception while closing hbase:namespace,,1686250584415.88de32a5a86759b62951f20f99d10abb. 
***** Cause: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1524306116-136.243.18.41-1686250582543:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at 
org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) 
at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:57:18,096 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1bb2430bf3171d76616c7e1910abdc0f, disabling compactions & flushes 2023-06-08 18:57:18,097 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): 
Closing region TestLogRolling-testLogRollOnPipelineRestart,,1686250585037.1bb2430bf3171d76616c7e1910abdc0f. 2023-06-08 18:57:18,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1686250585037.1bb2430bf3171d76616c7e1910abdc0f. 2023-06-08 18:57:18,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1686250585037.1bb2430bf3171d76616c7e1910abdc0f. after waiting 0 ms 2023-06-08 18:57:18,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1686250585037.1bb2430bf3171d76616c7e1910abdc0f. 2023-06-08 18:57:18,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1bb2430bf3171d76616c7e1910abdc0f: 2023-06-08 18:57:18,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnPipelineRestart,,1686250585037.1bb2430bf3171d76616c7e1910abdc0f. 
2023-06-08 18:57:18,151 DEBUG [regionserver/jenkins-hbase17:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250625987 newFile=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250638078 2023-06-08 18:57:18,152 WARN [regionserver/jenkins-hbase17:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL 2023-06-08 18:57:18,152 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250625987 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250638078 2023-06-08 18:57:18,152 WARN [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... 
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1524306116-136.243.18.41-1686250582543:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:57:18,152 ERROR [regionserver/jenkins-hbase17:0.logRoller] wal.FSHLog(462): Close of WAL hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250625987 failed. 
Cause="Unexpected BlockUCState: BP-1524306116-136.243.18.41-1686250582543:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) ", errors=3, hasUnflushedEntries=false 2023-06-08 18:57:18,153 ERROR [regionserver/jenkins-hbase17:0.logRoller] wal.FSHLog(426): Failed close of WAL writer hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250625987, unflushedEntries=0 org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1524306116-136.243.18.41-1686250582543:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:57:18,153 ERROR [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(221): Roll wal failed and waiting timeout, will not retry org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326/jenkins-hbase17.apache.org%2C34297%2C1686250583326.1686250625987, unflushedEntries=0 at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:427) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.replaceWriter(AbstractFSWAL.java:828) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:884) at org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:304) at org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:211) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: 
BP-1524306116-136.243.18.41-1686250582543:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at 
sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 18:57:18,161 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326 2023-06-08 18:57:18,168 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.72 KB at sequenceid=16 (bloomFilter=false), to=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/hbase/meta/1588230740/.tmp/info/55f25d66bcbd44558b6bb115187b2bfa 2023-06-08 18:57:18,175 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/WALs/jenkins-hbase17.apache.org,34297,1686250583326 2023-06-08 18:57:18,177 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... 
java.nio.channels.ClosedChannelException
	at org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.throwException4Close(DataStreamer.java:324)
	at org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:151)
	at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:105)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
	at java.io.DataOutputStream.write(DataOutputStream.java:107)
	at java.io.FilterOutputStream.write(FilterOutputStream.java:97)
	at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.writeWALTrailerAndMagic(ProtobufLogWriter.java:140)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.writeWALTrailer(AbstractProtobufLogWriter.java:234)
	at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.close(ProtobufLogWriter.java:67)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doShutdown(FSHLog.java:492)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:951)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:946)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
2023-06-08 18:57:18,177 DEBUG [regionserver/jenkins-hbase17:0.logRoller] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Failed log close in log roller
2023-06-08 18:57:18,236 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=244 B at sequenceid=16 (bloomFilter=false), to=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/hbase/meta/1588230740/.tmp/table/4cbcbe96f57e4b95a49c84f88d0b9f52
2023-06-08 18:57:18,247 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/hbase/meta/1588230740/.tmp/info/55f25d66bcbd44558b6bb115187b2bfa as hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/hbase/meta/1588230740/info/55f25d66bcbd44558b6bb115187b2bfa
2023-06-08 18:57:18,254 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/hbase/meta/1588230740/info/55f25d66bcbd44558b6bb115187b2bfa, entries=20, sequenceid=16, filesize=7.4 K
2023-06-08 18:57:18,255 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/hbase/meta/1588230740/.tmp/table/4cbcbe96f57e4b95a49c84f88d0b9f52 as hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/hbase/meta/1588230740/table/4cbcbe96f57e4b95a49c84f88d0b9f52
2023-06-08 18:57:18,264 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/data/hbase/meta/1588230740/table/4cbcbe96f57e4b95a49c84f88d0b9f52, entries=4, sequenceid=16, filesize=4.8 K
2023-06-08 18:57:18,265 WARN [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2895): 1588230740 : failed writing ABORT_FLUSH marker to WAL
java.io.IOException: Cannot append; log is closed, regionName = hbase:meta,,1.1588230740
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.stampSequenceIdAndPublishToRingBuffer(AbstractFSWAL.java:1166)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.append(FSHLog.java:513)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendMarker(AbstractFSWAL.java:1228)
	at org.apache.hadoop.hbase.regionserver.wal.WALUtil.doFullMarkerAppendTransaction(WALUtil.java:161)
	at org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeFlushMarker(WALUtil.java:89)
	at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2893)
	at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2580)
	at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2552)
	at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2543)
	at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1733)
	at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1554)
	at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:105)
	at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:102)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
2023-06-08 18:57:18,265 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Replay of WAL required. Forcing server shutdown
2023-06-08 18:57:18,265 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-06-08 18:57:18,265 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740
2023-06-08 18:57:18,284 INFO [RS:0;jenkins-hbase17:34297] regionserver.HRegionServer(3303): Received CLOSE for 1588230740
2023-06-08 18:57:18,284 INFO [RS:0;jenkins-hbase17:34297] regionserver.HRegionServer(3303): Received CLOSE for 88de32a5a86759b62951f20f99d10abb
2023-06-08 18:57:18,284 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes
2023-06-08 18:57:18,284 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 88de32a5a86759b62951f20f99d10abb, disabling compactions & flushes
2023-06-08 18:57:18,284 INFO [RS:0;jenkins-hbase17:34297] regionserver.HRegionServer(3303): Received CLOSE for 1bb2430bf3171d76616c7e1910abdc0f
2023-06-08 18:57:18,284 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686250584415.88de32a5a86759b62951f20f99d10abb.
2023-06-08 18:57:18,284 DEBUG [RS:0;jenkins-hbase17:34297] regionserver.HRegionServer(1504): Waiting on 1588230740, 1bb2430bf3171d76616c7e1910abdc0f, 88de32a5a86759b62951f20f99d10abb
2023-06-08 18:57:18,284 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740
2023-06-08 18:57:18,284 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686250584415.88de32a5a86759b62951f20f99d10abb.
2023-06-08 18:57:18,284 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740
2023-06-08 18:57:18,285 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686250584415.88de32a5a86759b62951f20f99d10abb. after waiting 0 ms
2023-06-08 18:57:18,285 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms
2023-06-08 18:57:18,285 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686250584415.88de32a5a86759b62951f20f99d10abb.
2023-06-08 18:57:18,285 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740
2023-06-08 18:57:18,285 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 88de32a5a86759b62951f20f99d10abb:
2023-06-08 18:57:18,285 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1686250584415.88de32a5a86759b62951f20f99d10abb.
2023-06-08 18:57:18,285 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1bb2430bf3171d76616c7e1910abdc0f, disabling compactions & flushes
2023-06-08 18:57:18,285 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1686250585037.1bb2430bf3171d76616c7e1910abdc0f.
2023-06-08 18:57:18,285 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1686250585037.1bb2430bf3171d76616c7e1910abdc0f.
2023-06-08 18:57:18,285 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1686250585037.1bb2430bf3171d76616c7e1910abdc0f. after waiting 0 ms
2023-06-08 18:57:18,285 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1686250585037.1bb2430bf3171d76616c7e1910abdc0f.
2023-06-08 18:57:18,285 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1bb2430bf3171d76616c7e1910abdc0f:
2023-06-08 18:57:18,285 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnPipelineRestart,,1686250585037.1bb2430bf3171d76616c7e1910abdc0f.
2023-06-08 18:57:18,296 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-06-08 18:57:18,296 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740
2023-06-08 18:57:18,484 INFO [RS:0;jenkins-hbase17:34297] regionserver.HRegionServer(1499): We were exiting though online regions are not empty, because some regions failed closing
2023-06-08 18:57:18,485 INFO [RS:0;jenkins-hbase17:34297] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,34297,1686250583326; all regions closed.
2023-06-08 18:57:18,485 DEBUG [RS:0;jenkins-hbase17:34297] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-08 18:57:18,485 INFO [RS:0;jenkins-hbase17:34297] regionserver.LeaseManager(133): Closed leases
2023-06-08 18:57:18,485 INFO [RS:0;jenkins-hbase17:34297] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown
2023-06-08 18:57:18,485 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-06-08 18:57:18,486 INFO [RS:0;jenkins-hbase17:34297] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:34297
2023-06-08 18:57:18,489 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-06-08 18:57:18,489 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): regionserver:34297-0x100abcb6f930001, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,34297,1686250583326
2023-06-08 18:57:18,489 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): regionserver:34297-0x100abcb6f930001, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-06-08 18:57:18,489 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,34297,1686250583326]
2023-06-08 18:57:18,490 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,34297,1686250583326; numProcessing=1
2023-06-08 18:57:18,490 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,34297,1686250583326 already deleted, retry=false
2023-06-08 18:57:18,490 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,34297,1686250583326 expired; onlineServers=0
2023-06-08 18:57:18,490 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase17.apache.org,46567,1686250583219' *****
2023-06-08 18:57:18,490 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0
2023-06-08 18:57:18,491 DEBUG [M:0;jenkins-hbase17:46567] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@36ddc7ad, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase17.apache.org/136.243.18.41:0
2023-06-08 18:57:18,491 INFO [M:0;jenkins-hbase17:46567] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,46567,1686250583219
2023-06-08 18:57:18,491 INFO [M:0;jenkins-hbase17:46567] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,46567,1686250583219; all regions closed.
2023-06-08 18:57:18,491 DEBUG [M:0;jenkins-hbase17:46567] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-08 18:57:18,491 DEBUG [M:0;jenkins-hbase17:46567] cleaner.LogCleaner(198): Cancelling LogCleaner
2023-06-08 18:57:18,492 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting.
2023-06-08 18:57:18,492 DEBUG [M:0;jenkins-hbase17:46567] cleaner.HFileCleaner(317): Stopping file delete threads
2023-06-08 18:57:18,492 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1686250583806] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1686250583806,5,FailOnTimeoutGroup]
2023-06-08 18:57:18,492 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1686250583808] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1686250583808,5,FailOnTimeoutGroup]
2023-06-08 18:57:18,492 INFO [M:0;jenkins-hbase17:46567] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish...
2023-06-08 18:57:18,493 INFO [M:0;jenkins-hbase17:46567] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish...
2023-06-08 18:57:18,493 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2023-06-08 18:57:18,493 INFO [M:0;jenkins-hbase17:46567] hbase.ChoreService(369): Chore service for: master/jenkins-hbase17:0 had [] on shutdown
2023-06-08 18:57:18,493 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-08 18:57:18,493 DEBUG [M:0;jenkins-hbase17:46567] master.HMaster(1512): Stopping service threads
2023-06-08 18:57:18,493 INFO [M:0;jenkins-hbase17:46567] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher
2023-06-08 18:57:18,494 ERROR [M:0;jenkins-hbase17:46567] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10]
2023-06-08 18:57:18,494 INFO [M:0;jenkins-hbase17:46567] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false
2023-06-08 18:57:18,494 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-06-08 18:57:18,494 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating.
2023-06-08 18:57:18,494 DEBUG [M:0;jenkins-hbase17:46567] zookeeper.ZKUtil(398): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-06-08 18:57:18,494 WARN [M:0;jenkins-hbase17:46567] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-06-08 18:57:18,494 INFO [M:0;jenkins-hbase17:46567] assignment.AssignmentManager(315): Stopping assignment manager
2023-06-08 18:57:18,495 INFO [M:0;jenkins-hbase17:46567] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-06-08 18:57:18,495 DEBUG [M:0;jenkins-hbase17:46567] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-06-08 18:57:18,495 INFO [M:0;jenkins-hbase17:46567] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-08 18:57:18,495 DEBUG [M:0;jenkins-hbase17:46567] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-08 18:57:18,495 DEBUG [M:0;jenkins-hbase17:46567] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-06-08 18:57:18,495 DEBUG [M:0;jenkins-hbase17:46567] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-08 18:57:18,495 INFO [M:0;jenkins-hbase17:46567] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.18 KB heapSize=45.83 KB
2023-06-08 18:57:18,511 INFO [M:0;jenkins-hbase17:46567] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.18 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9295e9c6690d43a4a27af38a38316b91
2023-06-08 18:57:18,520 DEBUG [M:0;jenkins-hbase17:46567] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9295e9c6690d43a4a27af38a38316b91 as hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9295e9c6690d43a4a27af38a38316b91
2023-06-08 18:57:18,527 INFO [M:0;jenkins-hbase17:46567] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33233/user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9295e9c6690d43a4a27af38a38316b91, entries=11, sequenceid=92, filesize=7.0 K
2023-06-08 18:57:18,529 INFO [M:0;jenkins-hbase17:46567] regionserver.HRegion(2948): Finished flush of dataSize ~38.18 KB/39101, heapSize ~45.81 KB/46912, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 34ms, sequenceid=92, compaction requested=false
2023-06-08 18:57:18,530 INFO [M:0;jenkins-hbase17:46567] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-08 18:57:18,530 DEBUG [M:0;jenkins-hbase17:46567] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-06-08 18:57:18,531 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/6bb3bf5c-6766-222c-6667-aa15bac7ab32/MasterData/WALs/jenkins-hbase17.apache.org,46567,1686250583219
2023-06-08 18:57:18,535 INFO [M:0;jenkins-hbase17:46567] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-06-08 18:57:18,535 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-06-08 18:57:18,536 INFO [M:0;jenkins-hbase17:46567] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:46567
2023-06-08 18:57:18,538 DEBUG [M:0;jenkins-hbase17:46567] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase17.apache.org,46567,1686250583219 already deleted, retry=false
2023-06-08 18:57:18,590 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): regionserver:34297-0x100abcb6f930001, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-08 18:57:18,590 INFO [RS:0;jenkins-hbase17:34297] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,34297,1686250583326; zookeeper connection closed.
2023-06-08 18:57:18,590 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): regionserver:34297-0x100abcb6f930001, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-08 18:57:18,591 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5a2a4aa8] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5a2a4aa8
2023-06-08 18:57:18,595 INFO [Listener at localhost.localdomain/37799] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete
2023-06-08 18:57:18,691 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-08 18:57:18,691 INFO [M:0;jenkins-hbase17:46567] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,46567,1686250583219; zookeeper connection closed.
2023-06-08 18:57:18,691 DEBUG [Listener at localhost.localdomain/35315-EventThread] zookeeper.ZKWatcher(600): master:46567-0x100abcb6f930000, quorum=127.0.0.1:53036, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-06-08 18:57:18,692 WARN [Listener at localhost.localdomain/37799] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-06-08 18:57:18,698 INFO [Listener at localhost.localdomain/37799] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-08 18:57:18,804 WARN [BP-1524306116-136.243.18.41-1686250582543 heartbeating to localhost.localdomain/127.0.0.1:33233] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-06-08 18:57:18,804 WARN [BP-1524306116-136.243.18.41-1686250582543 heartbeating to localhost.localdomain/127.0.0.1:33233] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1524306116-136.243.18.41-1686250582543 (Datanode Uuid dfebf782-3cb5-4925-b839-592680324388) service to localhost.localdomain/127.0.0.1:33233
2023-06-08 18:57:18,805 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/cluster_dfac5ae9-826e-4745-790b-900cf043cb22/dfs/data/data3/current/BP-1524306116-136.243.18.41-1686250582543] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-08 18:57:18,806 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/cluster_dfac5ae9-826e-4745-790b-900cf043cb22/dfs/data/data4/current/BP-1524306116-136.243.18.41-1686250582543] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-08 18:57:18,810 WARN [Listener at localhost.localdomain/37799] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-06-08 18:57:18,818 INFO [Listener at localhost.localdomain/37799] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-06-08 18:57:18,922 WARN [BP-1524306116-136.243.18.41-1686250582543 heartbeating to localhost.localdomain/127.0.0.1:33233] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-06-08 18:57:18,922 WARN [BP-1524306116-136.243.18.41-1686250582543 heartbeating to localhost.localdomain/127.0.0.1:33233] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1524306116-136.243.18.41-1686250582543 (Datanode Uuid e692bda4-3fa0-48fb-a0cc-08b6e7f0f404) service to localhost.localdomain/127.0.0.1:33233
2023-06-08 18:57:18,923 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/cluster_dfac5ae9-826e-4745-790b-900cf043cb22/dfs/data/data1/current/BP-1524306116-136.243.18.41-1686250582543] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-08 18:57:18,924 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/cluster_dfac5ae9-826e-4745-790b-900cf043cb22/dfs/data/data2/current/BP-1524306116-136.243.18.41-1686250582543] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-06-08 18:57:18,938 INFO [Listener at localhost.localdomain/37799] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0
2023-06-08 18:57:19,051 INFO [Listener at localhost.localdomain/37799] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-06-08 18:57:19,075 INFO [Listener at localhost.localdomain/37799] hbase.HBaseTestingUtility(1293): Minicluster is down
2023-06-08 18:57:19,087 INFO [Listener at localhost.localdomain/37799] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=88 (was 77)
Potentially hanging thread: nioEventLoopGroup-26-1
	java.lang.Thread.sleep(Native Method)
	io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
	io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-28-3
	java.lang.Thread.sleep(Native Method)
	io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
	io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-8-3
	org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
	org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
	org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
	org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
	org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
	org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-27-3
	java.lang.Thread.sleep(Native Method)
	io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
	io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-26-3
	java.lang.Thread.sleep(Native Method)
	io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
	io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: IPC Client (1807695942) connection to localhost.localdomain/127.0.0.1:33233 from jenkins.hfs.3
	java.lang.Object.wait(Native Method)
	org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
	org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
Potentially hanging thread: nioEventLoopGroup-28-2
	java.lang.Thread.sleep(Native Method)
	io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
	io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RPCClient-NioEventLoopGroup-4-7
	sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
	sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
	sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
	sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
	sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
	sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
	org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68)
	org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879)
	org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526)
	org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: Listener at localhost.localdomain/37799
	java.lang.Thread.dumpThreads(Native Method)
	java.lang.Thread.getAllStackTraces(Thread.java:1615)
	org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49)
	org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110)
	org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104)
	org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206)
	org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165)
	org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185)
	org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87)
	org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225)
	org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72)
	org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222)
	org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38)
	org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372)
	org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
	org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
	org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
	org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
	org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
	org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
	org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
	org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
	org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
	java.util.concurrent.FutureTask.run(FutureTask.java:266)
	java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-27-1
	java.lang.Thread.sleep(Native Method)
	io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
	io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: IPC Client (1807695942) connection to localhost.localdomain/127.0.0.1:33233
from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-28-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-29-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-26-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-29-1 java.lang.Thread.sleep(Native Method) 
io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1807695942) connection to localhost.localdomain/127.0.0.1:33233 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost.localdomain:33233 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-27-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:33233 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-29-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=476 (was 471) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=295 (was 294) - SystemLoadAverage LEAK? -, ProcessCount=184 (was 186), AvailableMemoryMB=1615 (was 2042) 2023-06-08 18:57:19,098 INFO [Listener at localhost.localdomain/37799] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=88, OpenFileDescriptor=476, MaxFileDescriptor=60000, SystemLoadAverage=295, ProcessCount=184, AvailableMemoryMB=1614 2023-06-08 18:57:19,098 INFO [Listener at localhost.localdomain/37799] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-06-08 18:57:19,098 INFO [Listener at localhost.localdomain/37799] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/hadoop.log.dir so I do NOT create it in target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b 2023-06-08 18:57:19,098 INFO [Listener at localhost.localdomain/37799] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/974c5823-bf10-bede-31a4-02a2cf2b9927/hadoop.tmp.dir so I do NOT create it in target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b 2023-06-08 18:57:19,098 INFO [Listener at 
localhost.localdomain/37799] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b/cluster_8194326c-3876-ab6a-789e-6ee4ac35b344, deleteOnExit=true 2023-06-08 18:57:19,099 INFO [Listener at localhost.localdomain/37799] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-06-08 18:57:19,099 INFO [Listener at localhost.localdomain/37799] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b/test.cache.data in system properties and HBase conf 2023-06-08 18:57:19,099 INFO [Listener at localhost.localdomain/37799] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b/hadoop.tmp.dir in system properties and HBase conf 2023-06-08 18:57:19,099 INFO [Listener at localhost.localdomain/37799] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b/hadoop.log.dir in system properties and HBase conf 2023-06-08 18:57:19,099 INFO [Listener at localhost.localdomain/37799] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b/mapreduce.cluster.local.dir in system properties and HBase conf 2023-06-08 18:57:19,099 INFO [Listener at localhost.localdomain/37799] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b/mapreduce.cluster.temp.dir 
in system properties and HBase conf 2023-06-08 18:57:19,099 INFO [Listener at localhost.localdomain/37799] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-06-08 18:57:19,099 DEBUG [Listener at localhost.localdomain/37799] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-06-08 18:57:19,100 INFO [Listener at localhost.localdomain/37799] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-06-08 18:57:19,100 INFO [Listener at localhost.localdomain/37799] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-06-08 18:57:19,100 INFO [Listener at localhost.localdomain/37799] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-06-08 18:57:19,100 INFO [Listener at localhost.localdomain/37799] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-08 18:57:19,100 INFO [Listener at localhost.localdomain/37799] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-06-08 18:57:19,100 INFO [Listener at localhost.localdomain/37799] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-08 18:57:19,100 INFO [Listener at localhost.localdomain/37799] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-08 18:57:19,100 INFO [Listener at localhost.localdomain/37799] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-08 18:57:19,101 INFO [Listener at localhost.localdomain/37799] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-08 18:57:19,101 INFO [Listener at localhost.localdomain/37799] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b/nfs.dump.dir in system properties and HBase conf 2023-06-08 
18:57:19,101 INFO [Listener at localhost.localdomain/37799] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b/java.io.tmpdir in system properties and HBase conf 2023-06-08 18:57:19,101 INFO [Listener at localhost.localdomain/37799] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-08 18:57:19,101 INFO [Listener at localhost.localdomain/37799] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-08 18:57:19,101 INFO [Listener at localhost.localdomain/37799] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-08 18:57:19,103 WARN [Listener at localhost.localdomain/37799] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-06-08 18:57:19,105 WARN [Listener at localhost.localdomain/37799] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-08 18:57:19,105 WARN [Listener at localhost.localdomain/37799] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-08 18:57:19,130 WARN [Listener at localhost.localdomain/37799] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 18:57:19,132 INFO [Listener at localhost.localdomain/37799] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 18:57:19,142 INFO [Listener at localhost.localdomain/37799] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b/java.io.tmpdir/Jetty_localhost_localdomain_44263_hdfs____s57idl/webapp 2023-06-08 18:57:19,243 INFO [Listener at localhost.localdomain/37799] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:44263 2023-06-08 18:57:19,245 WARN [Listener at localhost.localdomain/37799] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-06-08 18:57:19,246 WARN [Listener at localhost.localdomain/37799] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-08 18:57:19,246 WARN [Listener at localhost.localdomain/37799] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-08 18:57:19,287 WARN [Listener at localhost.localdomain/40035] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 18:57:19,300 WARN [Listener at localhost.localdomain/40035] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 18:57:19,303 WARN [Listener at localhost.localdomain/40035] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 18:57:19,305 INFO [Listener at localhost.localdomain/40035] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 18:57:19,312 INFO [Listener at localhost.localdomain/40035] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b/java.io.tmpdir/Jetty_localhost_39273_datanode____xlzc5r/webapp 2023-06-08 18:57:19,409 INFO [Listener at localhost.localdomain/40035] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39273 2023-06-08 18:57:19,416 WARN [Listener at localhost.localdomain/46615] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 18:57:19,431 WARN [Listener at localhost.localdomain/46615] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 18:57:19,433 WARN [Listener at localhost.localdomain/46615] http.HttpRequestLog(97): Jetty request 
log can only be enabled using Log4j 2023-06-08 18:57:19,435 INFO [Listener at localhost.localdomain/46615] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 18:57:19,439 INFO [Listener at localhost.localdomain/46615] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b/java.io.tmpdir/Jetty_localhost_41429_datanode____.qzywjf/webapp 2023-06-08 18:57:19,505 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1141674ffc1df6db: Processing first storage report for DS-9a89c9ad-60bc-4dc6-8b07-420baa35210a from datanode bfd90cac-e63b-42e6-941c-f14a5cb9408d 2023-06-08 18:57:19,505 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1141674ffc1df6db: from storage DS-9a89c9ad-60bc-4dc6-8b07-420baa35210a node DatanodeRegistration(127.0.0.1:43889, datanodeUuid=bfd90cac-e63b-42e6-941c-f14a5cb9408d, infoPort=35545, infoSecurePort=0, ipcPort=46615, storageInfo=lv=-57;cid=testClusterID;nsid=1714106170;c=1686250639107), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 18:57:19,505 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1141674ffc1df6db: Processing first storage report for DS-46ce3009-21cc-4876-a637-c7f418cdf2a2 from datanode bfd90cac-e63b-42e6-941c-f14a5cb9408d 2023-06-08 18:57:19,505 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1141674ffc1df6db: from storage DS-46ce3009-21cc-4876-a637-c7f418cdf2a2 node DatanodeRegistration(127.0.0.1:43889, datanodeUuid=bfd90cac-e63b-42e6-941c-f14a5cb9408d, infoPort=35545, infoSecurePort=0, ipcPort=46615, 
storageInfo=lv=-57;cid=testClusterID;nsid=1714106170;c=1686250639107), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 18:57:19,546 INFO [Listener at localhost.localdomain/46615] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41429 2023-06-08 18:57:19,556 WARN [Listener at localhost.localdomain/42847] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 18:57:19,648 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8468c6d3c32a600: Processing first storage report for DS-258e5b99-0b54-41fa-997f-434012dd8ce2 from datanode 120aecb7-e4b4-4e07-b446-10535e5494bc 2023-06-08 18:57:19,648 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8468c6d3c32a600: from storage DS-258e5b99-0b54-41fa-997f-434012dd8ce2 node DatanodeRegistration(127.0.0.1:35855, datanodeUuid=120aecb7-e4b4-4e07-b446-10535e5494bc, infoPort=34445, infoSecurePort=0, ipcPort=42847, storageInfo=lv=-57;cid=testClusterID;nsid=1714106170;c=1686250639107), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 18:57:19,648 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8468c6d3c32a600: Processing first storage report for DS-6ba2e2cd-7f10-4450-a2d9-48f5434ea8ff from datanode 120aecb7-e4b4-4e07-b446-10535e5494bc 2023-06-08 18:57:19,649 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8468c6d3c32a600: from storage DS-6ba2e2cd-7f10-4450-a2d9-48f5434ea8ff node DatanodeRegistration(127.0.0.1:35855, datanodeUuid=120aecb7-e4b4-4e07-b446-10535e5494bc, infoPort=34445, infoSecurePort=0, ipcPort=42847, storageInfo=lv=-57;cid=testClusterID;nsid=1714106170;c=1686250639107), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 18:57:19,669 DEBUG [Listener at 
localhost.localdomain/42847] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b 2023-06-08 18:57:19,673 INFO [Listener at localhost.localdomain/42847] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b/cluster_8194326c-3876-ab6a-789e-6ee4ac35b344/zookeeper_0, clientPort=54046, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b/cluster_8194326c-3876-ab6a-789e-6ee4ac35b344/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b/cluster_8194326c-3876-ab6a-789e-6ee4ac35b344/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-06-08 18:57:19,677 INFO [Listener at localhost.localdomain/42847] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=54046 2023-06-08 18:57:19,677 INFO [Listener at localhost.localdomain/42847] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 18:57:19,679 INFO [Listener at localhost.localdomain/42847] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 18:57:19,699 INFO [Listener at localhost.localdomain/42847] util.FSUtils(471): Created version file at 
hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6 with version=8 2023-06-08 18:57:19,700 INFO [Listener at localhost.localdomain/42847] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/hbase-staging 2023-06-08 18:57:19,702 INFO [Listener at localhost.localdomain/42847] client.ConnectionUtils(127): master/jenkins-hbase17:0 server-side Connection retries=45 2023-06-08 18:57:19,702 INFO [Listener at localhost.localdomain/42847] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 18:57:19,702 INFO [Listener at localhost.localdomain/42847] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-08 18:57:19,702 INFO [Listener at localhost.localdomain/42847] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-08 18:57:19,702 INFO [Listener at localhost.localdomain/42847] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 18:57:19,703 INFO [Listener at localhost.localdomain/42847] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-08 18:57:19,703 INFO [Listener at localhost.localdomain/42847] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, 
hbase.pb.AdminService 2023-06-08 18:57:19,704 INFO [Listener at localhost.localdomain/42847] ipc.NettyRpcServer(120): Bind to /136.243.18.41:38245 2023-06-08 18:57:19,705 INFO [Listener at localhost.localdomain/42847] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 18:57:19,706 INFO [Listener at localhost.localdomain/42847] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 18:57:19,707 INFO [Listener at localhost.localdomain/42847] zookeeper.RecoverableZooKeeper(93): Process identifier=master:38245 connecting to ZooKeeper ensemble=127.0.0.1:54046 2023-06-08 18:57:19,712 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:382450x0, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-08 18:57:19,718 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:38245-0x100abcc4c560000 connected 2023-06-08 18:57:19,731 DEBUG [Listener at localhost.localdomain/42847] zookeeper.ZKUtil(164): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-08 18:57:19,732 DEBUG [Listener at localhost.localdomain/42847] zookeeper.ZKUtil(164): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 18:57:19,733 DEBUG [Listener at localhost.localdomain/42847] zookeeper.ZKUtil(164): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-08 18:57:19,735 DEBUG [Listener at localhost.localdomain/42847] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38245 2023-06-08 18:57:19,736 DEBUG [Listener at localhost.localdomain/42847] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38245 2023-06-08 18:57:19,736 DEBUG [Listener at localhost.localdomain/42847] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38245 2023-06-08 18:57:19,737 DEBUG [Listener at localhost.localdomain/42847] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38245 2023-06-08 18:57:19,737 DEBUG [Listener at localhost.localdomain/42847] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38245 2023-06-08 18:57:19,737 INFO [Listener at localhost.localdomain/42847] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6, hbase.cluster.distributed=false 2023-06-08 18:57:19,751 INFO [Listener at localhost.localdomain/42847] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-06-08 18:57:19,751 INFO [Listener at localhost.localdomain/42847] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 18:57:19,751 INFO [Listener at localhost.localdomain/42847] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-08 18:57:19,751 INFO [Listener at localhost.localdomain/42847] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-08 18:57:19,751 INFO [Listener at localhost.localdomain/42847] ipc.RpcExecutor(189): Instantiated 
replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 18:57:19,751 INFO [Listener at localhost.localdomain/42847] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-08 18:57:19,751 INFO [Listener at localhost.localdomain/42847] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-08 18:57:19,753 INFO [Listener at localhost.localdomain/42847] ipc.NettyRpcServer(120): Bind to /136.243.18.41:35651 2023-06-08 18:57:19,754 INFO [Listener at localhost.localdomain/42847] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-08 18:57:19,755 DEBUG [Listener at localhost.localdomain/42847] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-08 18:57:19,755 INFO [Listener at localhost.localdomain/42847] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 18:57:19,756 INFO [Listener at localhost.localdomain/42847] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 18:57:19,757 INFO [Listener at localhost.localdomain/42847] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35651 connecting to ZooKeeper ensemble=127.0.0.1:54046 2023-06-08 18:57:19,761 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:356510x0, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-08 18:57:19,762 DEBUG 
[Listener at localhost.localdomain/42847] zookeeper.ZKUtil(164): regionserver:356510x0, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-08 18:57:19,765 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35651-0x100abcc4c560001 connected 2023-06-08 18:57:19,765 DEBUG [Listener at localhost.localdomain/42847] zookeeper.ZKUtil(164): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 18:57:19,766 DEBUG [Listener at localhost.localdomain/42847] zookeeper.ZKUtil(164): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-08 18:57:19,771 DEBUG [Listener at localhost.localdomain/42847] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35651 2023-06-08 18:57:19,771 DEBUG [Listener at localhost.localdomain/42847] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35651 2023-06-08 18:57:19,773 DEBUG [Listener at localhost.localdomain/42847] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35651 2023-06-08 18:57:19,773 DEBUG [Listener at localhost.localdomain/42847] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35651 2023-06-08 18:57:19,773 DEBUG [Listener at localhost.localdomain/42847] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35651 2023-06-08 18:57:19,781 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase17.apache.org,38245,1686250639701 2023-06-08 18:57:19,783 DEBUG [Listener at localhost.localdomain/42847-EventThread] 
zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-08 18:57:19,783 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase17.apache.org,38245,1686250639701 2023-06-08 18:57:19,784 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-08 18:57:19,784 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-08 18:57:19,784 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:57:19,785 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-08 18:57:19,789 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase17.apache.org,38245,1686250639701 from backup master directory 2023-06-08 18:57:19,789 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-08 18:57:19,790 DEBUG [Listener at 
localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase17.apache.org,38245,1686250639701 2023-06-08 18:57:19,790 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-08 18:57:19,791 WARN [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-06-08 18:57:19,791 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase17.apache.org,38245,1686250639701 2023-06-08 18:57:19,807 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/hbase.id with ID: 3e6d4724-4d50-44af-8f81-9e526237adc9 2023-06-08 18:57:19,822 INFO [master/jenkins-hbase17:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 18:57:19,824 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:57:19,832 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x7e627707 to 127.0.0.1:54046 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-08 18:57:19,837 
DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4472ebf6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-08 18:57:19,837 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-08 18:57:19,838 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-06-08 18:57:19,840 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 18:57:19,842 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/MasterData/data/master/store-tmp 2023-06-08 18:57:19,872 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:57:19,872 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-08 18:57:19,872 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 18:57:19,872 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 18:57:19,872 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-08 18:57:19,872 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 18:57:19,872 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-08 18:57:19,872 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-08 18:57:19,874 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/MasterData/WALs/jenkins-hbase17.apache.org,38245,1686250639701 2023-06-08 18:57:19,885 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C38245%2C1686250639701, suffix=, logDir=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/MasterData/WALs/jenkins-hbase17.apache.org,38245,1686250639701, archiveDir=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/MasterData/oldWALs, maxLogs=10 2023-06-08 18:57:19,897 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/MasterData/WALs/jenkins-hbase17.apache.org,38245,1686250639701/jenkins-hbase17.apache.org%2C38245%2C1686250639701.1686250639885 2023-06-08 18:57:19,897 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43889,DS-9a89c9ad-60bc-4dc6-8b07-420baa35210a,DISK], DatanodeInfoWithStorage[127.0.0.1:35855,DS-258e5b99-0b54-41fa-997f-434012dd8ce2,DISK]] 2023-06-08 18:57:19,897 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-08 18:57:19,898 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:57:19,898 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-08 18:57:19,898 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-08 18:57:19,899 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-06-08 18:57:19,901 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-08 18:57:19,902 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-08 18:57:19,902 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] 
regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:57:19,903 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-08 18:57:19,904 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-08 18:57:19,906 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-08 18:57:19,908 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 18:57:19,908 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=883234, jitterRate=0.12309107184410095}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-08 18:57:19,909 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-08 18:57:19,909 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, 
flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-08 18:57:19,910 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-08 18:57:19,910 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-06-08 18:57:19,910 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-06-08 18:57:19,911 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-06-08 18:57:19,911 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-06-08 18:57:19,911 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-08 18:57:19,911 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-08 18:57:19,912 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-06-08 18:57:19,921 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-08 18:57:19,921 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-06-08 18:57:19,922 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-08 18:57:19,922 INFO [master/jenkins-hbase17:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-08 18:57:19,922 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-08 18:57:19,930 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:57:19,930 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-08 18:57:19,931 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): 
master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-08 18:57:19,931 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-08 18:57:19,932 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-08 18:57:19,932 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-08 18:57:19,932 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:57:19,934 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase17.apache.org,38245,1686250639701, sessionid=0x100abcc4c560000, setting cluster-up flag (Was=false) 2023-06-08 18:57:19,936 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:57:19,939 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-08 18:57:19,940 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] 
procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,38245,1686250639701 2023-06-08 18:57:19,942 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:57:19,944 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-08 18:57:19,945 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,38245,1686250639701 2023-06-08 18:57:19,946 WARN [master/jenkins-hbase17:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/.hbase-snapshot/.tmp 2023-06-08 18:57:19,948 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-08 18:57:19,948 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-06-08 18:57:19,948 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-06-08 18:57:19,948 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-06-08 18:57:19,948 DEBUG 
[master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-06-08 18:57:19,948 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase17:0, corePoolSize=10, maxPoolSize=10 2023-06-08 18:57:19,948 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:57:19,948 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-06-08 18:57:19,949 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:57:19,950 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1686250669950 2023-06-08 18:57:19,950 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-08 18:57:19,950 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-08 18:57:19,950 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-08 18:57:19,950 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-08 18:57:19,950 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-08 18:57:19,950 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-08 18:57:19,950 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-08 18:57:19,951 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-08 18:57:19,951 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-08 18:57:19,951 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-08 18:57:19,951 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-08 18:57:19,951 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-08 18:57:19,952 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-08 18:57:19,952 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-08 18:57:19,952 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large 
file=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1686250639952,5,FailOnTimeoutGroup] 2023-06-08 18:57:19,952 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1686250639952,5,FailOnTimeoutGroup] 2023-06-08 18:57:19,952 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-08 18:57:19,952 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-08 18:57:19,952 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-08 18:57:19,952 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-06-08 18:57:19,956 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-08 18:57:19,964 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-08 18:57:19,964 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-08 18:57:19,965 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', 
KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6 2023-06-08 18:57:19,987 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-08 18:57:19,989 INFO [RS:0;jenkins-hbase17:35651] regionserver.HRegionServer(951): ClusterId : 3e6d4724-4d50-44af-8f81-9e526237adc9 2023-06-08 18:57:19,990 DEBUG [RS:0;jenkins-hbase17:35651] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-08 18:57:19,993 DEBUG [RS:0;jenkins-hbase17:35651] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-08 18:57:19,993 DEBUG [RS:0;jenkins-hbase17:35651] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-08 18:57:19,996 DEBUG [RS:0;jenkins-hbase17:35651] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-08 18:57:19,997 DEBUG [RS:0;jenkins-hbase17:35651] zookeeper.ReadOnlyZKClient(139): Connect 0x1b12ccbd to 127.0.0.1:54046 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-08 18:57:20,009 DEBUG [RS:0;jenkins-hbase17:35651] 
ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@15d521af, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-08 18:57:20,010 DEBUG [RS:0;jenkins-hbase17:35651] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@11eaa7cd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-06-08 18:57:20,017 DEBUG [RS:0;jenkins-hbase17:35651] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase17:35651 2023-06-08 18:57:20,018 INFO [RS:0;jenkins-hbase17:35651] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-08 18:57:20,018 INFO [RS:0;jenkins-hbase17:35651] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-08 18:57:20,018 DEBUG [RS:0;jenkins-hbase17:35651] regionserver.HRegionServer(1022): About to register with Master. 
2023-06-08 18:57:20,018 INFO [RS:0;jenkins-hbase17:35651] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase17.apache.org,38245,1686250639701 with isa=jenkins-hbase17.apache.org/136.243.18.41:35651, startcode=1686250639750 2023-06-08 18:57:20,019 DEBUG [RS:0;jenkins-hbase17:35651] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-08 18:57:20,036 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:44431, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-06-08 18:57:20,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:57:20,039 DEBUG [RS:0;jenkins-hbase17:35651] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6 2023-06-08 18:57:20,039 DEBUG [RS:0;jenkins-hbase17:35651] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:40035 2023-06-08 18:57:20,039 DEBUG [RS:0;jenkins-hbase17:35651] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-08 18:57:20,040 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 18:57:20,040 DEBUG [RS:0;jenkins-hbase17:35651] zookeeper.ZKUtil(162): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:57:20,041 WARN [RS:0;jenkins-hbase17:35651] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will 
not be cleared on crash by start scripts (Longer MTTR!) 2023-06-08 18:57:20,041 INFO [RS:0;jenkins-hbase17:35651] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 18:57:20,041 DEBUG [RS:0;jenkins-hbase17:35651] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/WALs/jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:57:20,042 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,35651,1686250639750] 2023-06-08 18:57:20,051 DEBUG [RS:0;jenkins-hbase17:35651] zookeeper.ZKUtil(162): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:57:20,052 DEBUG [RS:0;jenkins-hbase17:35651] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-08 18:57:20,052 INFO [RS:0;jenkins-hbase17:35651] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-08 18:57:20,054 INFO [RS:0;jenkins-hbase17:35651] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-08 18:57:20,056 INFO [RS:0;jenkins-hbase17:35651] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-08 18:57:20,056 INFO [RS:0;jenkins-hbase17:35651] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-06-08 18:57:20,060 INFO [RS:0;jenkins-hbase17:35651] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-08 18:57:20,062 INFO [RS:0;jenkins-hbase17:35651] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-06-08 18:57:20,062 DEBUG [RS:0;jenkins-hbase17:35651] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:57:20,063 DEBUG [RS:0;jenkins-hbase17:35651] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:57:20,063 DEBUG [RS:0;jenkins-hbase17:35651] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:57:20,063 DEBUG [RS:0;jenkins-hbase17:35651] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:57:20,063 DEBUG [RS:0;jenkins-hbase17:35651] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:57:20,063 DEBUG [RS:0;jenkins-hbase17:35651] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-06-08 18:57:20,063 DEBUG [RS:0;jenkins-hbase17:35651] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:57:20,063 DEBUG [RS:0;jenkins-hbase17:35651] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:57:20,063 DEBUG [RS:0;jenkins-hbase17:35651] 
executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:57:20,063 DEBUG [RS:0;jenkins-hbase17:35651] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:57:20,065 INFO [RS:0;jenkins-hbase17:35651] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-08 18:57:20,065 INFO [RS:0;jenkins-hbase17:35651] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-08 18:57:20,065 INFO [RS:0;jenkins-hbase17:35651] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-08 18:57:20,078 INFO [RS:0;jenkins-hbase17:35651] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-08 18:57:20,078 INFO [RS:0;jenkins-hbase17:35651] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,35651,1686250639750-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-06-08 18:57:20,093 INFO [RS:0;jenkins-hbase17:35651] regionserver.Replication(203): jenkins-hbase17.apache.org,35651,1686250639750 started 2023-06-08 18:57:20,093 INFO [RS:0;jenkins-hbase17:35651] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,35651,1686250639750, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:35651, sessionid=0x100abcc4c560001 2023-06-08 18:57:20,093 DEBUG [RS:0;jenkins-hbase17:35651] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-08 18:57:20,093 DEBUG [RS:0;jenkins-hbase17:35651] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:57:20,093 DEBUG [RS:0;jenkins-hbase17:35651] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,35651,1686250639750' 2023-06-08 18:57:20,093 DEBUG [RS:0;jenkins-hbase17:35651] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-08 18:57:20,094 DEBUG [RS:0;jenkins-hbase17:35651] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-08 18:57:20,096 DEBUG [RS:0;jenkins-hbase17:35651] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-08 18:57:20,096 DEBUG [RS:0;jenkins-hbase17:35651] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-08 18:57:20,096 DEBUG [RS:0;jenkins-hbase17:35651] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:57:20,096 DEBUG [RS:0;jenkins-hbase17:35651] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,35651,1686250639750' 2023-06-08 18:57:20,096 DEBUG [RS:0;jenkins-hbase17:35651] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures 
on node: '/hbase/online-snapshot/abort' 2023-06-08 18:57:20,097 DEBUG [RS:0;jenkins-hbase17:35651] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-08 18:57:20,097 DEBUG [RS:0;jenkins-hbase17:35651] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-08 18:57:20,097 INFO [RS:0;jenkins-hbase17:35651] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-08 18:57:20,097 INFO [RS:0;jenkins-hbase17:35651] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-06-08 18:57:20,199 INFO [RS:0;jenkins-hbase17:35651] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C35651%2C1686250639750, suffix=, logDir=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/WALs/jenkins-hbase17.apache.org,35651,1686250639750, archiveDir=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/oldWALs, maxLogs=32 2023-06-08 18:57:20,208 INFO [RS:0;jenkins-hbase17:35651] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/WALs/jenkins-hbase17.apache.org,35651,1686250639750/jenkins-hbase17.apache.org%2C35651%2C1686250639750.1686250640201 2023-06-08 18:57:20,208 DEBUG [RS:0;jenkins-hbase17:35651] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43889,DS-9a89c9ad-60bc-4dc6-8b07-420baa35210a,DISK], DatanodeInfoWithStorage[127.0.0.1:35855,DS-258e5b99-0b54-41fa-997f-434012dd8ce2,DISK]] 2023-06-08 18:57:20,389 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:57:20,391 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): 
Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-08 18:57:20,394 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/meta/1588230740/info 2023-06-08 18:57:20,394 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-08 18:57:20,395 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:57:20,395 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-08 18:57:20,398 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/meta/1588230740/rep_barrier 2023-06-08 18:57:20,398 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-08 18:57:20,399 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:57:20,399 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-08 18:57:20,401 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/meta/1588230740/table 2023-06-08 18:57:20,402 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 
2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-08 18:57:20,403 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:57:20,404 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/meta/1588230740 2023-06-08 18:57:20,405 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/meta/1588230740 2023-06-08 18:57:20,408 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-06-08 18:57:20,410 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-08 18:57:20,416 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 18:57:20,417 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=866242, jitterRate=0.10148376226425171}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-08 18:57:20,417 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-08 18:57:20,417 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-08 18:57:20,417 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-08 18:57:20,418 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-08 18:57:20,418 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-08 18:57:20,418 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-08 18:57:20,418 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-08 18:57:20,418 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-08 18:57:20,419 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-08 18:57:20,419 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-08 18:57:20,419 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): 
Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-08 18:57:20,422 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-08 18:57:20,424 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-08 18:57:20,574 DEBUG [jenkins-hbase17:38245] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-08 18:57:20,575 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,35651,1686250639750, state=OPENING 2023-06-08 18:57:20,576 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-08 18:57:20,577 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:57:20,578 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-08 18:57:20,578 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,35651,1686250639750}] 2023-06-08 18:57:20,734 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to 
jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:57:20,734 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-08 18:57:20,738 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:34408, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-08 18:57:20,743 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-08 18:57:20,743 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 18:57:20,745 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C35651%2C1686250639750.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/WALs/jenkins-hbase17.apache.org,35651,1686250639750, archiveDir=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/oldWALs, maxLogs=32 2023-06-08 18:57:20,765 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/WALs/jenkins-hbase17.apache.org,35651,1686250639750/jenkins-hbase17.apache.org%2C35651%2C1686250639750.meta.1686250640748.meta 2023-06-08 18:57:20,765 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43889,DS-9a89c9ad-60bc-4dc6-8b07-420baa35210a,DISK], DatanodeInfoWithStorage[127.0.0.1:35855,DS-258e5b99-0b54-41fa-997f-434012dd8ce2,DISK]] 2023-06-08 18:57:20,765 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, 
NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-08 18:57:20,765 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-08 18:57:20,766 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-08 18:57:20,766 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-06-08 18:57:20,766 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-08 18:57:20,766 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:57:20,766 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-08 18:57:20,766 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-08 18:57:20,768 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-08 18:57:20,770 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/meta/1588230740/info 2023-06-08 18:57:20,770 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/meta/1588230740/info 2023-06-08 18:57:20,770 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-08 18:57:20,771 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:57:20,772 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-08 18:57:20,773 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/meta/1588230740/rep_barrier 2023-06-08 18:57:20,773 DEBUG [StoreOpener-1588230740-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/meta/1588230740/rep_barrier 2023-06-08 18:57:20,773 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-08 18:57:20,775 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:57:20,775 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-08 18:57:20,776 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/meta/1588230740/table 2023-06-08 18:57:20,776 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/meta/1588230740/table 
2023-06-08 18:57:20,777 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-08 18:57:20,778 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:57:20,779 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/meta/1588230740 2023-06-08 18:57:20,780 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/meta/1588230740 2023-06-08 18:57:20,782 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-06-08 18:57:20,784 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-08 18:57:20,785 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=837000, jitterRate=0.06430089473724365}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-08 18:57:20,785 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-08 18:57:20,788 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1686250640734 2023-06-08 18:57:20,797 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,35651,1686250639750, state=OPEN 2023-06-08 18:57:20,798 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-08 18:57:20,799 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-08 18:57:20,799 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-08 18:57:20,799 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-08 18:57:20,802 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-08 18:57:20,802 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,35651,1686250639750 in 221 msec 2023-06-08 18:57:20,804 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-08 18:57:20,804 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 383 msec 2023-06-08 18:57:20,807 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 859 msec 2023-06-08 18:57:20,807 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1686250640807, completionTime=-1 2023-06-08 18:57:20,807 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-08 18:57:20,807 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-06-08 18:57:20,810 DEBUG [hconnection-0x685d1966-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-08 18:57:20,812 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:34410, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-08 18:57:20,814 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-08 18:57:20,814 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1686250700814 2023-06-08 18:57:20,814 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1686250760814 2023-06-08 18:57:20,814 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 7 msec 2023-06-08 18:57:20,829 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,38245,1686250639701-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-08 18:57:20,829 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,38245,1686250639701-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-08 18:57:20,829 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,38245,1686250639701-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-06-08 18:57:20,829 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase17:38245, period=300000, unit=MILLISECONDS is enabled. 2023-06-08 18:57:20,829 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-08 18:57:20,829 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-06-08 18:57:20,829 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-08 18:57:20,831 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-08 18:57:20,835 DEBUG [master/jenkins-hbase17:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-08 18:57:20,837 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-08 18:57:20,840 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-08 18:57:20,850 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/.tmp/data/hbase/namespace/efe8d972490a683488eae09798e89b28 2023-06-08 18:57:20,851 DEBUG [HFileArchiver-7] 
backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/.tmp/data/hbase/namespace/efe8d972490a683488eae09798e89b28 empty. 2023-06-08 18:57:20,851 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/.tmp/data/hbase/namespace/efe8d972490a683488eae09798e89b28 2023-06-08 18:57:20,852 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-08 18:57:20,921 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-08 18:57:20,928 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => efe8d972490a683488eae09798e89b28, NAME => 'hbase:namespace,,1686250640829.efe8d972490a683488eae09798e89b28.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/.tmp 2023-06-08 18:57:20,950 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686250640829.efe8d972490a683488eae09798e89b28.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:57:20,950 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing efe8d972490a683488eae09798e89b28, disabling compactions & flushes 2023-06-08 18:57:20,950 INFO 
[RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686250640829.efe8d972490a683488eae09798e89b28. 2023-06-08 18:57:20,950 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686250640829.efe8d972490a683488eae09798e89b28. 2023-06-08 18:57:20,950 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686250640829.efe8d972490a683488eae09798e89b28. after waiting 0 ms 2023-06-08 18:57:20,950 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686250640829.efe8d972490a683488eae09798e89b28. 2023-06-08 18:57:20,950 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686250640829.efe8d972490a683488eae09798e89b28. 2023-06-08 18:57:20,950 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for efe8d972490a683488eae09798e89b28: 2023-06-08 18:57:20,956 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-08 18:57:20,957 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1686250640829.efe8d972490a683488eae09798e89b28.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686250640957"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686250640957"}]},"ts":"1686250640957"} 2023-06-08 18:57:20,961 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-06-08 18:57:20,962 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-08 18:57:20,962 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686250640962"}]},"ts":"1686250640962"} 2023-06-08 18:57:20,964 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-08 18:57:20,969 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=efe8d972490a683488eae09798e89b28, ASSIGN}] 2023-06-08 18:57:20,972 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=efe8d972490a683488eae09798e89b28, ASSIGN 2023-06-08 18:57:20,976 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=efe8d972490a683488eae09798e89b28, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,35651,1686250639750; forceNewPlan=false, retain=false 2023-06-08 18:57:21,127 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=efe8d972490a683488eae09798e89b28, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:57:21,127 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1686250640829.efe8d972490a683488eae09798e89b28.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686250641127"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686250641127"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686250641127"}]},"ts":"1686250641127"} 2023-06-08 18:57:21,130 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure efe8d972490a683488eae09798e89b28, server=jenkins-hbase17.apache.org,35651,1686250639750}] 2023-06-08 18:57:21,287 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1686250640829.efe8d972490a683488eae09798e89b28. 2023-06-08 18:57:21,287 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => efe8d972490a683488eae09798e89b28, NAME => 'hbase:namespace,,1686250640829.efe8d972490a683488eae09798e89b28.', STARTKEY => '', ENDKEY => ''} 2023-06-08 18:57:21,287 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace efe8d972490a683488eae09798e89b28 2023-06-08 18:57:21,287 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686250640829.efe8d972490a683488eae09798e89b28.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:57:21,287 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for efe8d972490a683488eae09798e89b28 2023-06-08 18:57:21,287 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for efe8d972490a683488eae09798e89b28 2023-06-08 18:57:21,289 INFO 
[StoreOpener-efe8d972490a683488eae09798e89b28-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region efe8d972490a683488eae09798e89b28 2023-06-08 18:57:21,291 DEBUG [StoreOpener-efe8d972490a683488eae09798e89b28-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/namespace/efe8d972490a683488eae09798e89b28/info 2023-06-08 18:57:21,291 DEBUG [StoreOpener-efe8d972490a683488eae09798e89b28-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/namespace/efe8d972490a683488eae09798e89b28/info 2023-06-08 18:57:21,291 INFO [StoreOpener-efe8d972490a683488eae09798e89b28-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region efe8d972490a683488eae09798e89b28 columnFamilyName info 2023-06-08 18:57:21,292 INFO [StoreOpener-efe8d972490a683488eae09798e89b28-1] regionserver.HStore(310): Store=efe8d972490a683488eae09798e89b28/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 
2023-06-08 18:57:21,292 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/namespace/efe8d972490a683488eae09798e89b28 2023-06-08 18:57:21,293 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/namespace/efe8d972490a683488eae09798e89b28 2023-06-08 18:57:21,296 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for efe8d972490a683488eae09798e89b28 2023-06-08 18:57:21,299 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/namespace/efe8d972490a683488eae09798e89b28/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 18:57:21,300 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened efe8d972490a683488eae09798e89b28; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=751846, jitterRate=-0.043979302048683167}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-08 18:57:21,300 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for efe8d972490a683488eae09798e89b28: 2023-06-08 18:57:21,302 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1686250640829.efe8d972490a683488eae09798e89b28., pid=6, masterSystemTime=1686250641282 2023-06-08 18:57:21,304 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1686250640829.efe8d972490a683488eae09798e89b28. 2023-06-08 18:57:21,305 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1686250640829.efe8d972490a683488eae09798e89b28. 2023-06-08 18:57:21,305 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=efe8d972490a683488eae09798e89b28, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:57:21,305 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1686250640829.efe8d972490a683488eae09798e89b28.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686250641305"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686250641305"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686250641305"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686250641305"}]},"ts":"1686250641305"} 2023-06-08 18:57:21,310 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-08 18:57:21,310 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure efe8d972490a683488eae09798e89b28, server=jenkins-hbase17.apache.org,35651,1686250639750 in 177 msec 2023-06-08 18:57:21,312 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-08 18:57:21,312 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=efe8d972490a683488eae09798e89b28, ASSIGN in 341 msec 2023-06-08 18:57:21,313 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-08 18:57:21,313 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686250641313"}]},"ts":"1686250641313"} 2023-06-08 18:57:21,315 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-08 18:57:21,317 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-08 18:57:21,319 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 488 msec 2023-06-08 18:57:21,332 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-08 18:57:21,333 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-08 18:57:21,333 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:57:21,337 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-08 18:57:21,347 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, 
quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-08 18:57:21,351 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 13 msec 2023-06-08 18:57:21,360 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-08 18:57:21,369 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-08 18:57:21,372 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 12 msec 2023-06-08 18:57:21,384 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-08 18:57:21,386 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-08 18:57:21,386 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.595sec 2023-06-08 18:57:21,386 INFO [master/jenkins-hbase17:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-08 18:57:21,386 INFO [master/jenkins-hbase17:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-06-08 18:57:21,386 INFO [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-08 18:57:21,386 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,38245,1686250639701-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-08 18:57:21,386 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,38245,1686250639701-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-06-08 18:57:21,387 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-08 18:57:21,390 DEBUG [Listener at localhost.localdomain/42847] zookeeper.ReadOnlyZKClient(139): Connect 0x6b45324b to 127.0.0.1:54046 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-08 18:57:21,393 DEBUG [Listener at localhost.localdomain/42847] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7f1d0ea3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-08 18:57:21,395 DEBUG [hconnection-0x494f28a9-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-08 18:57:21,397 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:34416, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-08 18:57:21,398 INFO [Listener at localhost.localdomain/42847] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase17.apache.org,38245,1686250639701 2023-06-08 18:57:21,398 INFO [Listener at localhost.localdomain/42847] fs.HFileSystem(337): Added intercepting 
call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 18:57:21,400 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-08 18:57:21,400 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:57:21,401 INFO [Listener at localhost.localdomain/42847] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-08 18:57:21,403 DEBUG [Listener at localhost.localdomain/42847] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-06-08 18:57:21,407 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:39338, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-06-08 18:57:21,409 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-06-08 18:57:21,409 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-06-08 18:57:21,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-08 18:57:21,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:57:21,414 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-06-08 18:57:21,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] master.MasterRpcServices(697): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testCompactionRecordDoesntBlockRolling" procId is: 9 2023-06-08 18:57:21,415 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-08 18:57:21,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-08 18:57:21,416 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01 2023-06-08 18:57:21,417 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01 empty. 2023-06-08 18:57:21,417 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01 2023-06-08 18:57:21,417 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testCompactionRecordDoesntBlockRolling regions 2023-06-08 18:57:21,427 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/.tabledesc/.tableinfo.0000000001 2023-06-08 18:57:21,428 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => 63807ad67750a3f7815918af3a920e01, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, 
regionDir=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/.tmp 2023-06-08 18:57:21,435 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:57:21,435 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1604): Closing 63807ad67750a3f7815918af3a920e01, disabling compactions & flushes 2023-06-08 18:57:21,435 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01. 2023-06-08 18:57:21,435 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01. 2023-06-08 18:57:21,435 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01. after waiting 0 ms 2023-06-08 18:57:21,435 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01. 
2023-06-08 18:57:21,435 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01. 2023-06-08 18:57:21,435 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1558): Region close journal for 63807ad67750a3f7815918af3a920e01: 2023-06-08 18:57:21,438 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ADD_TO_META 2023-06-08 18:57:21,439 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1686250641439"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686250641439"}]},"ts":"1686250641439"} 2023-06-08 18:57:21,440 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-06-08 18:57:21,441 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-08 18:57:21,442 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686250641441"}]},"ts":"1686250641441"} 2023-06-08 18:57:21,443 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLING in hbase:meta 2023-06-08 18:57:21,448 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=63807ad67750a3f7815918af3a920e01, ASSIGN}] 2023-06-08 18:57:21,451 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=63807ad67750a3f7815918af3a920e01, ASSIGN 2023-06-08 18:57:21,453 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=63807ad67750a3f7815918af3a920e01, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,35651,1686250639750; forceNewPlan=false, retain=false 2023-06-08 18:57:21,604 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=63807ad67750a3f7815918af3a920e01, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,35651,1686250639750 
2023-06-08 18:57:21,604 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1686250641604"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686250641604"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686250641604"}]},"ts":"1686250641604"} 2023-06-08 18:57:21,608 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 63807ad67750a3f7815918af3a920e01, server=jenkins-hbase17.apache.org,35651,1686250639750}] 2023-06-08 18:57:21,772 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01. 2023-06-08 18:57:21,772 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 63807ad67750a3f7815918af3a920e01, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01.', STARTKEY => '', ENDKEY => ''} 2023-06-08 18:57:21,772 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testCompactionRecordDoesntBlockRolling 63807ad67750a3f7815918af3a920e01 2023-06-08 18:57:21,773 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:57:21,773 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 
63807ad67750a3f7815918af3a920e01 2023-06-08 18:57:21,773 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 63807ad67750a3f7815918af3a920e01 2023-06-08 18:57:21,775 INFO [StoreOpener-63807ad67750a3f7815918af3a920e01-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 63807ad67750a3f7815918af3a920e01 2023-06-08 18:57:21,778 DEBUG [StoreOpener-63807ad67750a3f7815918af3a920e01-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/info 2023-06-08 18:57:21,778 DEBUG [StoreOpener-63807ad67750a3f7815918af3a920e01-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/info 2023-06-08 18:57:21,778 INFO [StoreOpener-63807ad67750a3f7815918af3a920e01-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
63807ad67750a3f7815918af3a920e01 columnFamilyName info 2023-06-08 18:57:21,779 INFO [StoreOpener-63807ad67750a3f7815918af3a920e01-1] regionserver.HStore(310): Store=63807ad67750a3f7815918af3a920e01/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:57:21,781 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01 2023-06-08 18:57:21,781 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01 2023-06-08 18:57:21,784 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 63807ad67750a3f7815918af3a920e01 2023-06-08 18:57:21,786 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 18:57:21,787 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 63807ad67750a3f7815918af3a920e01; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=824947, jitterRate=0.04897478222846985}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-08 18:57:21,787 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 63807ad67750a3f7815918af3a920e01: 2023-06-08 18:57:21,788 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01., pid=11, masterSystemTime=1686250641763 2023-06-08 18:57:21,790 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01. 2023-06-08 18:57:21,790 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01. 2023-06-08 18:57:21,791 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=63807ad67750a3f7815918af3a920e01, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:57:21,791 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1686250641790"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686250641790"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686250641790"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686250641790"}]},"ts":"1686250641790"} 2023-06-08 18:57:21,795 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-06-08 18:57:21,796 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 63807ad67750a3f7815918af3a920e01, 
server=jenkins-hbase17.apache.org,35651,1686250639750 in 185 msec 2023-06-08 18:57:21,798 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-06-08 18:57:21,798 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=63807ad67750a3f7815918af3a920e01, ASSIGN in 347 msec 2023-06-08 18:57:21,798 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-08 18:57:21,799 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686250641798"}]},"ts":"1686250641798"} 2023-06-08 18:57:21,800 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLED in hbase:meta 2023-06-08 18:57:21,803 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_POST_OPERATION 2023-06-08 18:57:21,805 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling in 394 msec 2023-06-08 18:57:26,333 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-08 18:57:26,428 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 
'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-08 18:57:31,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-08 18:57:31,417 INFO [Listener at localhost.localdomain/42847] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testCompactionRecordDoesntBlockRolling, procId: 9 completed 2023-06-08 18:57:31,420 DEBUG [Listener at localhost.localdomain/42847] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:57:31,420 DEBUG [Listener at localhost.localdomain/42847] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01. 2023-06-08 18:57:31,440 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] master.MasterRpcServices(933): Client=jenkins//136.243.18.41 procedure request for: flush-table-proc 2023-06-08 18:57:31,459 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] procedure.ProcedureCoordinator(165): Submitting procedure hbase:namespace 2023-06-08 18:57:31,460 INFO [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'hbase:namespace' 2023-06-08 18:57:31,460 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-08 18:57:31,461 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'hbase:namespace' starting 'acquire' 2023-06-08 18:57:31,461 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'hbase:namespace', kicking off acquire phase on members. 
2023-06-08 18:57:31,466 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-06-08 18:57:31,466 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-06-08 18:57:31,468 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-08 18:57:31,468 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:57:31,468 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-08 18:57:31,469 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-08 18:57:31,469 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/hbase:namespace 2023-06-08 18:57:31,469 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:57:31,469 DEBUG 
[(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-06-08 18:57:31,469 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-06-08 18:57:31,469 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-06-08 18:57:31,470 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-06-08 18:57:31,470 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for hbase:namespace 2023-06-08 18:57:31,477 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:hbase:namespace 2023-06-08 18:57:31,484 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'hbase:namespace' with timeout 60000ms 2023-06-08 18:57:31,484 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-08 18:57:31,491 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'hbase:namespace' starting 'acquire' stage 2023-06-08 18:57:31,493 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-06-08 18:57:31,494 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 
2023-06-08 18:57:31,500 DEBUG [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on hbase:namespace,,1686250640829.efe8d972490a683488eae09798e89b28. 2023-06-08 18:57:31,508 DEBUG [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region hbase:namespace,,1686250640829.efe8d972490a683488eae09798e89b28. started... 2023-06-08 18:57:31,512 INFO [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing efe8d972490a683488eae09798e89b28 1/1 column families, dataSize=78 B heapSize=488 B 2023-06-08 18:57:31,616 INFO [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/namespace/efe8d972490a683488eae09798e89b28/.tmp/info/3398f1cb89654b3d8a3d90feffe258ae 2023-06-08 18:57:31,668 DEBUG [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/namespace/efe8d972490a683488eae09798e89b28/.tmp/info/3398f1cb89654b3d8a3d90feffe258ae as hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/namespace/efe8d972490a683488eae09798e89b28/info/3398f1cb89654b3d8a3d90feffe258ae 2023-06-08 18:57:31,693 INFO [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/namespace/efe8d972490a683488eae09798e89b28/info/3398f1cb89654b3d8a3d90feffe258ae, entries=2, sequenceid=6, filesize=4.8 K 
2023-06-08 18:57:31,705 INFO [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for efe8d972490a683488eae09798e89b28 in 193ms, sequenceid=6, compaction requested=false 2023-06-08 18:57:31,707 DEBUG [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for efe8d972490a683488eae09798e89b28: 2023-06-08 18:57:31,707 DEBUG [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on hbase:namespace,,1686250640829.efe8d972490a683488eae09798e89b28. 2023-06-08 18:57:31,708 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-06-08 18:57:31,708 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
2023-06-08 18:57:31,708 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:57:31,708 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'hbase:namespace' locally acquired 2023-06-08 18:57:31,708 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase17.apache.org,35651,1686250639750' joining acquired barrier for procedure (hbase:namespace) in zk 2023-06-08 18:57:31,712 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/hbase:namespace 2023-06-08 18:57:31,712 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:57:31,712 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:57:31,712 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-08 18:57:31,712 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-08 18:57:31,713 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-08 18:57:31,713 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-08 18:57:31,713 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-08 18:57:31,714 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:57:31,714 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-08 18:57:31,714 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase17.apache.org,35651,1686250639750' joining acquired barrier for procedure 'hbase:namespace' on coordinator 2023-06-08 18:57:31,715 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'hbase:namespace' starting 'in-barrier' execution. 2023-06-08 18:57:31,715 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@375861d1[Count = 0] remaining members to acquire global barrier 2023-06-08 18:57:31,715 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/hbase:namespace 2023-06-08 18:57:31,717 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:57:31,717 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-06-08 18:57:31,721 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] zookeeper.ZKUtil(162): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/hbase:namespace 2023-06-08 18:57:31,721 DEBUG [member: 
'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/hbase:namespace 2023-06-08 18:57:31,721 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'hbase:namespace' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-06-08 18:57:31,721 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'hbase:namespace' received 'reached' from coordinator. 2023-06-08 18:57:31,721 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'hbase:namespace' locally completed 2023-06-08 18:57:31,721 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'hbase:namespace' completed for member 'jenkins-hbase17.apache.org,35651,1686250639750' in zk 2023-06-08 18:57:31,734 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'hbase:namespace' has notified controller of completion 2023-06-08 18:57:31,734 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:57:31,734 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-06-08 18:57:31,734 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:57:31,734 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system:
2023-06-08 18:57:31,734 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc
2023-06-08 18:57:31,734 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'hbase:namespace' completed.
2023-06-08 18:57:31,744 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort
2023-06-08 18:57:31,745 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired
2023-06-08 18:57:31,745 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace
2023-06-08 18:57:31,746 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:57:31,752 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached
2023-06-08 18:57:31,753 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace
2023-06-08 18:57:31,753 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:57:31,754 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'hbase:namespace' member 'jenkins-hbase17.apache.org,35651,1686250639750':
2023-06-08 18:57:31,754 INFO [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'hbase:namespace' execution completed
2023-06-08 18:57:31,754 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase.
2023-06-08 18:57:31,754 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase17.apache.org,35651,1686250639750' released barrier for procedure'hbase:namespace', counting down latch. Waiting for 0 more
2023-06-08 18:57:31,754 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures
2023-06-08 18:57:31,755 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:hbase:namespace
2023-06-08 18:57:31,755 INFO [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure hbase:namespaceincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort
2023-06-08 18:57:31,757 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace
2023-06-08 18:57:31,757 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace
2023-06-08 18:57:31,757 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/hbase:namespace
2023-06-08 18:57:31,757 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/hbase:namespace
2023-06-08 18:57:31,757 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system:
2023-06-08 18:57:31,757 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc
2023-06-08 18:57:31,757 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort
2023-06-08 18:57:31,757 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace
2023-06-08 18:57:31,758 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort
2023-06-08 18:57:31,758 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort
2023-06-08 18:57:31,758 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:57:31,758 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-06-08 18:57:31,758 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace
2023-06-08 18:57:31,759 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace
2023-06-08 18:57:31,759 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired
2023-06-08 18:57:31,765 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace
2023-06-08 18:57:31,765 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:57:31,766 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:57:31,766 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached
2023-06-08 18:57:31,767 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace
2023-06-08 18:57:31,767 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:57:31,776 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:57:31,776 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace
2023-06-08 18:57:31,776 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace
2023-06-08 18:57:31,777 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:57:31,777 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace
2023-06-08 18:57:31,777 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace
2023-06-08 18:57:31,777 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired
2023-06-08 18:57:31,777 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace
2023-06-08 18:57:31,777 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired
2023-06-08 18:57:31,777 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-06-08 18:57:31,777 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer.
2023-06-08 18:57:31,777 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace
2023-06-08 18:57:31,777 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort
2023-06-08 18:57:31,778 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort
2023-06-08 18:57:31,778 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-06-08 18:57:31,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'hbase:namespace'
2023-06-08 18:57:31,785 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/hbase:namespace because node does not exist (not an error)
2023-06-08 18:57:31,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful!
2023-06-08 18:57:31,790 DEBUG [Listener at localhost.localdomain/42847] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : hbase:namespace'' to complete. (max 20000 ms per retry)
2023-06-08 18:57:31,790 DEBUG [Listener at localhost.localdomain/42847] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion.
2023-06-08 18:57:41,791 DEBUG [Listener at localhost.localdomain/42847] client.HBaseAdmin(2704): Getting current status of procedure from master...
2023-06-08 18:57:41,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done
2023-06-08 18:57:41,807 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] master.MasterRpcServices(933): Client=jenkins//136.243.18.41 procedure request for: flush-table-proc
2023-06-08 18:57:41,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:41,809 INFO [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling'
2023-06-08 18:57:41,809 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms
2023-06-08 18:57:41,810 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire'
2023-06-08 18:57:41,810 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members.
2023-06-08 18:57:41,810 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:41,811 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:41,811 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:57:41,811 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired
2023-06-08 18:57:41,811 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired
2023-06-08 18:57:41,811 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-06-08 18:57:41,811 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:57:41,812 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire'
2023-06-08 18:57:41,812 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:41,812 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:41,812 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4
2023-06-08 18:57:41,812 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:41,812 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:41,813 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:41,815 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms
2023-06-08 18:57:41,815 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms
2023-06-08 18:57:41,816 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage
2023-06-08 18:57:41,816 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions
2023-06-08 18:57:41,816 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish.
2023-06-08 18:57:41,816 DEBUG [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01.
2023-06-08 18:57:41,816 DEBUG [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01. started...
2023-06-08 18:57:41,817 INFO [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 63807ad67750a3f7815918af3a920e01 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB
2023-06-08 18:57:41,831 INFO [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=5 (bloomFilter=true), to=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/.tmp/info/c53e4ec5cd004554acb738c1ea0df362
2023-06-08 18:57:41,844 DEBUG [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/.tmp/info/c53e4ec5cd004554acb738c1ea0df362 as hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/info/c53e4ec5cd004554acb738c1ea0df362
2023-06-08 18:57:41,854 INFO [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/info/c53e4ec5cd004554acb738c1ea0df362, entries=1, sequenceid=5, filesize=5.8 K
2023-06-08 18:57:41,855 INFO [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 63807ad67750a3f7815918af3a920e01 in 39ms, sequenceid=5, compaction requested=false
2023-06-08 18:57:41,856 DEBUG [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 63807ad67750a3f7815918af3a920e01:
2023-06-08 18:57:41,856 DEBUG [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01.
2023-06-08 18:57:41,856 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks.
2023-06-08 18:57:41,856 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks.
2023-06-08 18:57:41,856 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:57:41,856 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired
2023-06-08 18:57:41,856 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase17.apache.org,35651,1686250639750' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk
2023-06-08 18:57:41,858 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:57:41,858 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:41,858 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:57:41,858 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system:
2023-06-08 18:57:41,859 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc
2023-06-08 18:57:41,859 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:41,859 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator
2023-06-08 18:57:41,859 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort
2023-06-08 18:57:41,859 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired
2023-06-08 18:57:41,860 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:41,860 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:57:41,860 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached
2023-06-08 18:57:41,861 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase17.apache.org,35651,1686250639750' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator
2023-06-08 18:57:41,861 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@3367f3e2[Count = 0] remaining members to acquire global barrier
2023-06-08 18:57:41,861 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution.
2023-06-08 18:57:41,861 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:41,862 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:41,862 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:41,862 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:41,862 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator.
2023-06-08 18:57:41,862 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed
2023-06-08 18:57:41,862 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:57:41,862 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release'
2023-06-08 18:57:41,862 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase17.apache.org,35651,1686250639750' in zk
2023-06-08 18:57:41,866 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion
2023-06-08 18:57:41,866 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:57:41,866 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer.
2023-06-08 18:57:41,866 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed.
2023-06-08 18:57:41,867 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:57:41,867 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system:
2023-06-08 18:57:41,867 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc
2023-06-08 18:57:41,868 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort
2023-06-08 18:57:41,868 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired
2023-06-08 18:57:41,869 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:41,869 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:57:41,869 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached
2023-06-08 18:57:41,870 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:41,870 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:57:41,871 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase17.apache.org,35651,1686250639750':
2023-06-08 18:57:41,871 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase17.apache.org,35651,1686250639750' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more
2023-06-08 18:57:41,871 INFO [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed
2023-06-08 18:57:41,871 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase.
2023-06-08 18:57:41,871 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures
2023-06-08 18:57:41,871 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:41,871 INFO [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort
2023-06-08 18:57:41,874 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:41,875 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:41,875 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system:
2023-06-08 18:57:41,875 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc
2023-06-08 18:57:41,876 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort
2023-06-08 18:57:41,877 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:57:41,877 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:41,878 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired
2023-06-08 18:57:41,878 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:41,879 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:57:41,879 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:41,879 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort
2023-06-08 18:57:41,880 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:57:41,880 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached
2023-06-08 18:57:41,880 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:41,880 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:41,880 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:41,881 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:57:41,887 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired
2023-06-08 18:57:41,891 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:57:41,891 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 
18:57:41,891 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:57:41,891 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:57:41,891 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:57:41,891 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:57:41,892 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:57:41,892 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-06-08 18:57:41,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-08 18:57:41,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-06-08 18:57:41,892 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-06-08 18:57:41,895 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-08 18:57:41,895 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-08 18:57:41,896 DEBUG [Listener at localhost.localdomain/42847] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling' to complete. (max 20000 ms per retry) 2023-06-08 18:57:41,896 DEBUG [Listener at localhost.localdomain/42847] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-06-08 18:57:41,900 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-08 18:57:41,900 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-08 18:57:51,896 DEBUG [Listener at localhost.localdomain/42847] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-06-08 18:57:51,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-06-08 18:57:51,909 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] master.MasterRpcServices(933): Client=jenkins//136.243.18.41 procedure request for: flush-table-proc 2023-06-08 18:57:51,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-06-08 18:57:51,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:57:51,911 INFO [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-08 18:57:51,911 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-08 18:57:51,912 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-06-08 18:57:51,912 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
2023-06-08 18:57:51,912 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:57:51,912 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:57:51,913 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-08 18:57:51,913 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:57:51,914 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-08 18:57:51,914 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-08 18:57:51,914 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:57:51,914 DEBUG 
[(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-06-08 18:57:51,914 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:57:51,914 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:57:51,915 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-06-08 18:57:51,915 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:57:51,915 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:57:51,915 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-06-08 18:57:51,915 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:57:51,915 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-06-08 18:57:51,915 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-08 18:57:51,916 
DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-06-08 18:57:51,916 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-06-08 18:57:51,916 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-06-08 18:57:51,916 DEBUG [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01. 2023-06-08 18:57:51,916 DEBUG [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01. started... 
2023-06-08 18:57:51,916 INFO [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 63807ad67750a3f7815918af3a920e01 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-06-08 18:57:51,927 INFO [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/.tmp/info/665ea43ee4624914965b37aae408c36b 2023-06-08 18:57:51,941 DEBUG [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/.tmp/info/665ea43ee4624914965b37aae408c36b as hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/info/665ea43ee4624914965b37aae408c36b 2023-06-08 18:57:51,948 INFO [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/info/665ea43ee4624914965b37aae408c36b, entries=1, sequenceid=9, filesize=5.8 K 2023-06-08 18:57:51,949 INFO [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 63807ad67750a3f7815918af3a920e01 in 33ms, sequenceid=9, compaction 
requested=false 2023-06-08 18:57:51,950 DEBUG [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 63807ad67750a3f7815918af3a920e01: 2023-06-08 18:57:51,950 DEBUG [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01. 2023-06-08 18:57:51,950 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-06-08 18:57:51,950 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-06-08 18:57:51,950 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:57:51,950 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-06-08 18:57:51,950 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase17.apache.org,35651,1686250639750' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-06-08 18:57:51,956 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:57:51,956 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:57:51,957 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:57:51,957 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-08 18:57:51,957 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-08 18:57:51,957 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:57:51,957 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-06-08 18:57:51,957 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-08 18:57:51,957 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-08 18:57:51,958 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:57:51,958 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): 
|-------jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:57:51,958 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-08 18:57:51,959 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase17.apache.org,35651,1686250639750' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-06-08 18:57:51,959 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-06-08 18:57:51,959 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@76656bea[Count = 0] remaining members to acquire global barrier 2023-06-08 18:57:51,959 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:57:51,960 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:57:51,960 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:57:51,960 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:57:51,960 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(180): 
Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 2023-06-08 18:57:51,960 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-06-08 18:57:51,960 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase17.apache.org,35651,1686250639750' in zk 2023-06-08 18:57:51,960 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:57:51,960 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-06-08 18:57:51,961 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:57:51,961 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-06-08 18:57:51,961 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: 
/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:57:51,962 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-08 18:57:51,962 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-08 18:57:51,962 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-08 18:57:51,962 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-06-08 18:57:51,962 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-08 18:57:51,963 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-08 18:57:51,963 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:57:51,963 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:57:51,964 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-08 18:57:51,964 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:57:51,964 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:57:51,965 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase17.apache.org,35651,1686250639750': 2023-06-08 18:57:51,965 DEBUG [zk-event-processor-pool-0] 
procedure.Procedure(321): Member: 'jenkins-hbase17.apache.org,35651,1686250639750' released barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-06-08 18:57:51,965 INFO [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-06-08 18:57:51,965 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-06-08 18:57:51,965 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-06-08 18:57:51,965 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:57:51,965 INFO [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRolling including nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-06-08 18:57:51,966 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:57:51,967 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:51,967 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:51,967 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:51,967 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system:
2023-06-08 18:57:51,967 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc
2023-06-08 18:57:51,967 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort
2023-06-08 18:57:51,967 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:51,967 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort
2023-06-08 18:57:51,967 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:57:51,967 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort
2023-06-08 18:57:51,967 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-06-08 18:57:51,967 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:51,967 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:51,968 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired
2023-06-08 18:57:51,968 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:51,969 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:57:51,969 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:57:51,970 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached
2023-06-08 18:57:51,970 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:51,970 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:57:51,973 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:57:51,973 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired
2023-06-08 18:57:51,974 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort
2023-06-08 18:57:51,974 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:51,974 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired
2023-06-08 18:57:51,974 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:51,974 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:57:51,974 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling'
2023-06-08 18:57:51,974 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-06-08 18:57:51,974 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful!
2023-06-08 18:57:51,974 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error)
2023-06-08 18:57:51,974 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:51,974 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:51,974 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:57:51,974 DEBUG [Listener at localhost.localdomain/42847] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry)
2023-06-08 18:57:51,974 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer.
2023-06-08 18:57:51,974 DEBUG [Listener at localhost.localdomain/42847] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion.
2023-06-08 18:57:51,976 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort
2023-06-08 18:57:51,976 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-06-08 18:58:01,976 DEBUG [Listener at localhost.localdomain/42847] client.HBaseAdmin(2704): Getting current status of procedure from master...
2023-06-08 18:58:01,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done
2023-06-08 18:58:01,996 INFO [Listener at localhost.localdomain/42847] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/WALs/jenkins-hbase17.apache.org,35651,1686250639750/jenkins-hbase17.apache.org%2C35651%2C1686250639750.1686250640201 with entries=13, filesize=6.44 KB; new WAL /user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/WALs/jenkins-hbase17.apache.org,35651,1686250639750/jenkins-hbase17.apache.org%2C35651%2C1686250639750.1686250681980
2023-06-08 18:58:01,996 DEBUG [Listener at localhost.localdomain/42847] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43889,DS-9a89c9ad-60bc-4dc6-8b07-420baa35210a,DISK], DatanodeInfoWithStorage[127.0.0.1:35855,DS-258e5b99-0b54-41fa-997f-434012dd8ce2,DISK]]
2023-06-08 18:58:01,996 DEBUG [Listener at localhost.localdomain/42847] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/WALs/jenkins-hbase17.apache.org,35651,1686250639750/jenkins-hbase17.apache.org%2C35651%2C1686250639750.1686250640201 is not closed yet, will try archiving it next time
2023-06-08 18:58:02,002 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] master.MasterRpcServices(933): Client=jenkins//136.243.18.41 procedure request for: flush-table-proc
2023-06-08 18:58:02,004 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt.
2023-06-08 18:58:02,004 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:02,004 INFO [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling'
2023-06-08 18:58:02,004 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms
2023-06-08 18:58:02,005 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire'
2023-06-08 18:58:02,005 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members.
2023-06-08 18:58:02,006 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:02,006 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:02,007 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:58:02,007 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired
2023-06-08 18:58:02,007 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired
2023-06-08 18:58:02,007 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-06-08 18:58:02,007 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:58:02,007 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire'
2023-06-08 18:58:02,007 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:02,007 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:02,008 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4
2023-06-08 18:58:02,008 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:02,008 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:02,008 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing
2023-06-08 18:58:02,008 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:02,008 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms
2023-06-08 18:58:02,008 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms
2023-06-08 18:58:02,009 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage
2023-06-08 18:58:02,009 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions
2023-06-08 18:58:02,009 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish.
2023-06-08 18:58:02,009 DEBUG [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01.
2023-06-08 18:58:02,009 DEBUG [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01. started...
2023-06-08 18:58:02,009 INFO [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 63807ad67750a3f7815918af3a920e01 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB
2023-06-08 18:58:02,024 INFO [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=13 (bloomFilter=true), to=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/.tmp/info/74e0f5ad27ab485c803528e88a9fef38
2023-06-08 18:58:02,031 DEBUG [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/.tmp/info/74e0f5ad27ab485c803528e88a9fef38 as hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/info/74e0f5ad27ab485c803528e88a9fef38
2023-06-08 18:58:02,038 INFO [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/info/74e0f5ad27ab485c803528e88a9fef38, entries=1, sequenceid=13, filesize=5.8 K
2023-06-08 18:58:02,039 INFO [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 63807ad67750a3f7815918af3a920e01 in 30ms, sequenceid=13, compaction requested=true
2023-06-08 18:58:02,039 DEBUG [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 63807ad67750a3f7815918af3a920e01:
2023-06-08 18:58:02,039 DEBUG [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01.
2023-06-08 18:58:02,039 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks.
2023-06-08 18:58:02,039 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks.
2023-06-08 18:58:02,039 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:58:02,039 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired
2023-06-08 18:58:02,039 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase17.apache.org,35651,1686250639750' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk
2023-06-08 18:58:02,041 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:02,041 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:58:02,041 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:58:02,041 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system:
2023-06-08 18:58:02,041 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc
2023-06-08 18:58:02,041 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:02,041 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator
2023-06-08 18:58:02,042 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort
2023-06-08 18:58:02,042 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired
2023-06-08 18:58:02,042 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:02,042 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:58:02,043 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached
2023-06-08 18:58:02,043 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase17.apache.org,35651,1686250639750' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator
2023-06-08 18:58:02,043 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution.
2023-06-08 18:58:02,043 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@33fb26bc[Count = 0] remaining members to acquire global barrier
2023-06-08 18:58:02,043 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:02,044 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:02,044 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:02,044 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:02,044 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:58:02,044 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator.
2023-06-08 18:58:02,044 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release'
2023-06-08 18:58:02,044 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed
2023-06-08 18:58:02,044 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase17.apache.org,35651,1686250639750' in zk
2023-06-08 18:58:02,046 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion
2023-06-08 18:58:02,046 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:58:02,046 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer.
2023-06-08 18:58:02,046 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:58:02,046 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system:
2023-06-08 18:58:02,046 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc
2023-06-08 18:58:02,046 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed.
2023-06-08 18:58:02,047 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort
2023-06-08 18:58:02,047 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired
2023-06-08 18:58:02,047 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:02,048 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:58:02,048 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached
2023-06-08 18:58:02,048 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:02,048 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:58:02,049 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase17.apache.org,35651,1686250639750':
2023-06-08 18:58:02,049 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase17.apache.org,35651,1686250639750' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more
2023-06-08 18:58:02,049 INFO [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed
2023-06-08 18:58:02,049 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase.
2023-06-08 18:58:02,049 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures
2023-06-08 18:58:02,049 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:02,049 INFO [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort
2023-06-08 18:58:02,050 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:02,050 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:02,050 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort
2023-06-08 18:58:02,050 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:02,050 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:02,050 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:02,050 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system:
2023-06-08 18:58:02,050 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc
2023-06-08 18:58:02,050 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort
2023-06-08 18:58:02,050 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-06-08 18:58:02,050 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:58:02,050 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:02,050 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort
2023-06-08 18:58:02,054 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:02,054 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired
2023-06-08 18:58:02,055 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:02,055 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:58:02,055 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:58:02,055 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached
2023-06-08 18:58:02,055 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:02,057 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:58:02,059 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:58:02,059 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:02,059 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:02,059 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer.
2023-06-08 18:58:02,059 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:58:02,060 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:02,060 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:02,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling'
2023-06-08 18:58:02,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful!
2023-06-08 18:58:02,060 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired
2023-06-08 18:58:02,060 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:02,060 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired
2023-06-08 18:58:02,060 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error)
2023-06-08 18:58:02,060 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort
2023-06-08 18:58:02,060 DEBUG [Listener at localhost.localdomain/42847] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry)
2023-06-08 18:58:02,060 DEBUG [Listener at localhost.localdomain/42847] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion.
2023-06-08 18:58:02,060 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-06-08 18:58:02,061 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort
2023-06-08 18:58:02,061 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-06-08 18:58:12,060 DEBUG [Listener at localhost.localdomain/42847] client.HBaseAdmin(2704): Getting current status of procedure from master...
2023-06-08 18:58:12,062 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done
2023-06-08 18:58:12,062 DEBUG [Listener at localhost.localdomain/42847] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-06-08 18:58:12,068 DEBUG [Listener at localhost.localdomain/42847] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 17769 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-06-08 18:58:12,069 DEBUG [Listener at localhost.localdomain/42847] regionserver.HStore(1912): 63807ad67750a3f7815918af3a920e01/info is initiating minor compaction (all files)
2023-06-08 18:58:12,069 INFO [Listener at localhost.localdomain/42847] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms
2023-06-08 18:58:12,069 INFO [Listener at
localhost.localdomain/42847] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-08 18:58:12,070 INFO [Listener at localhost.localdomain/42847] regionserver.HRegion(2259): Starting compaction of 63807ad67750a3f7815918af3a920e01/info in TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01. 2023-06-08 18:58:12,070 INFO [Listener at localhost.localdomain/42847] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/info/c53e4ec5cd004554acb738c1ea0df362, hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/info/665ea43ee4624914965b37aae408c36b, hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/info/74e0f5ad27ab485c803528e88a9fef38] into tmpdir=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/.tmp, totalSize=17.4 K 2023-06-08 18:58:12,071 DEBUG [Listener at localhost.localdomain/42847] compactions.Compactor(207): Compacting c53e4ec5cd004554acb738c1ea0df362, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=5, earliestPutTs=1686250661801 2023-06-08 18:58:12,071 DEBUG [Listener at localhost.localdomain/42847] compactions.Compactor(207): Compacting 665ea43ee4624914965b37aae408c36b, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=9, earliestPutTs=1686250671900 2023-06-08 
18:58:12,072 DEBUG [Listener at localhost.localdomain/42847] compactions.Compactor(207): Compacting 74e0f5ad27ab485c803528e88a9fef38, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=13, earliestPutTs=1686250681979 2023-06-08 18:58:12,091 INFO [Listener at localhost.localdomain/42847] throttle.PressureAwareThroughputController(145): 63807ad67750a3f7815918af3a920e01#info#compaction#21 average throughput is 3.08 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-08 18:58:12,514 DEBUG [Listener at localhost.localdomain/42847] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/.tmp/info/65be8f01e54f476ba72c6f1a71e97e86 as hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/info/65be8f01e54f476ba72c6f1a71e97e86 2023-06-08 18:58:12,523 INFO [Listener at localhost.localdomain/42847] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 63807ad67750a3f7815918af3a920e01/info of 63807ad67750a3f7815918af3a920e01 into 65be8f01e54f476ba72c6f1a71e97e86(size=8.0 K), total size for store is 8.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-08 18:58:12,523 DEBUG [Listener at localhost.localdomain/42847] regionserver.HRegion(2289): Compaction status journal for 63807ad67750a3f7815918af3a920e01: 2023-06-08 18:58:12,536 INFO [Listener at localhost.localdomain/42847] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/WALs/jenkins-hbase17.apache.org,35651,1686250639750/jenkins-hbase17.apache.org%2C35651%2C1686250639750.1686250681980 with entries=4, filesize=2.45 KB; new WAL /user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/WALs/jenkins-hbase17.apache.org,35651,1686250639750/jenkins-hbase17.apache.org%2C35651%2C1686250639750.1686250692525 2023-06-08 18:58:12,536 DEBUG [Listener at localhost.localdomain/42847] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35855,DS-258e5b99-0b54-41fa-997f-434012dd8ce2,DISK], DatanodeInfoWithStorage[127.0.0.1:43889,DS-9a89c9ad-60bc-4dc6-8b07-420baa35210a,DISK]] 2023-06-08 18:58:12,536 DEBUG [Listener at localhost.localdomain/42847] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/WALs/jenkins-hbase17.apache.org,35651,1686250639750/jenkins-hbase17.apache.org%2C35651%2C1686250639750.1686250681980 is not closed yet, will try archiving it next time 2023-06-08 18:58:12,536 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/WALs/jenkins-hbase17.apache.org,35651,1686250639750/jenkins-hbase17.apache.org%2C35651%2C1686250639750.1686250640201 to hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/oldWALs/jenkins-hbase17.apache.org%2C35651%2C1686250639750.1686250640201 2023-06-08 18:58:12,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] master.MasterRpcServices(933): Client=jenkins//136.243.18.41 procedure request for: flush-table-proc 2023-06-08 
18:58:12,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-06-08 18:58:12,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:58:12,544 INFO [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-08 18:58:12,544 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-08 18:58:12,545 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-06-08 18:58:12,545 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
2023-06-08 18:58:12,545 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:58:12,545 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:58:12,548 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:58:12,548 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-08 18:58:12,548 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-08 18:58:12,548 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-08 18:58:12,548 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:58:12,548 DEBUG 
[(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-06-08 18:58:12,548 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:58:12,549 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:58:12,549 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-06-08 18:58:12,549 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:58:12,549 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:58:12,549 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-06-08 18:58:12,549 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:58:12,549 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-06-08 18:58:12,549 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-08 18:58:12,550 
DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-06-08 18:58:12,550 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-06-08 18:58:12,550 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-06-08 18:58:12,550 DEBUG [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01. 2023-06-08 18:58:12,550 DEBUG [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01. started... 
2023-06-08 18:58:12,550 INFO [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 63807ad67750a3f7815918af3a920e01 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-06-08 18:58:12,562 INFO [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=18 (bloomFilter=true), to=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/.tmp/info/6d93e4b02e3f4088a3d56f7ede67ae84 2023-06-08 18:58:12,570 DEBUG [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/.tmp/info/6d93e4b02e3f4088a3d56f7ede67ae84 as hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/info/6d93e4b02e3f4088a3d56f7ede67ae84 2023-06-08 18:58:12,577 INFO [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/info/6d93e4b02e3f4088a3d56f7ede67ae84, entries=1, sequenceid=18, filesize=5.8 K 2023-06-08 18:58:12,578 INFO [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 63807ad67750a3f7815918af3a920e01 in 28ms, sequenceid=18, compaction 
requested=false 2023-06-08 18:58:12,579 DEBUG [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 63807ad67750a3f7815918af3a920e01: 2023-06-08 18:58:12,579 DEBUG [rs(jenkins-hbase17.apache.org,35651,1686250639750)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01. 2023-06-08 18:58:12,579 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-06-08 18:58:12,579 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-06-08 18:58:12,579 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:58:12,579 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-06-08 18:58:12,579 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase17.apache.org,35651,1686250639750' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-06-08 18:58:12,581 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 
2023-06-08 18:58:12,581 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:58:12,581 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:58:12,581 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-08 18:58:12,581 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-08 18:58:12,581 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:58:12,581 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-06-08 18:58:12,581 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-08 18:58:12,581 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-08 18:58:12,582 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:58:12,582 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 
18:58:12,582 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-08 18:58:12,582 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase17.apache.org,35651,1686250639750' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-06-08 18:58:12,582 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@47d99ffb[Count = 0] remaining members to acquire global barrier 2023-06-08 18:58:12,582 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-06-08 18:58:12,583 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:58:12,583 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:58:12,583 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:58:12,584 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:58:12,584 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 
received 'reached' from coordinator. 2023-06-08 18:58:12,584 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-06-08 18:58:12,584 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:58:12,584 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-06-08 18:58:12,584 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase17.apache.org,35651,1686250639750' in zk 2023-06-08 18:58:12,586 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-06-08 18:58:12,586 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-08 18:58:12,586 DEBUG [member: 'jenkins-hbase17.apache.org,35651,1686250639750' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 
2023-06-08 18:58:12,591 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:58:12,594 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:58:12,594 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-08 18:58:12,594 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-08 18:58:12,595 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-08 18:58:12,595 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-08 18:58:12,596 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:58:12,599 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:58:12,603 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-08 18:58:12,603 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:58:12,604 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,35651,1686250639750 2023-06-08 18:58:12,604 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase17.apache.org,35651,1686250639750': 2023-06-08 
18:58:12,605 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase17.apache.org,35651,1686250639750' released barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-06-08 18:58:12,605 INFO [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-06-08 18:58:12,605 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-06-08 18:58:12,605 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-06-08 18:58:12,605 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:58:12,605 INFO [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRolling including nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-06-08 18:58:12,606 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 18:58:12,606 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:12,606 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:12,607 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system:
2023-06-08 18:58:12,607 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc
2023-06-08 18:58:12,606 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:12,606 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort
2023-06-08 18:58:12,607 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:12,607 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:58:12,607 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort
2023-06-08 18:58:12,607 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort
2023-06-08 18:58:12,607 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-06-08 18:58:12,608 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:12,608 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:12,608 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired
2023-06-08 18:58:12,608 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:12,609 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:58:12,609 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:58:12,609 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached
2023-06-08 18:58:12,609 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:12,610 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:58:12,612 DEBUG [(jenkins-hbase17.apache.org,38245,1686250639701)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer.
2023-06-08 18:58:12,612 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling'
2023-06-08 18:58:12,612 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful!
2023-06-08 18:58:12,613 DEBUG [Listener at localhost.localdomain/42847] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry)
2023-06-08 18:58:12,613 DEBUG [Listener at localhost.localdomain/42847] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion.
2023-06-08 18:58:12,616 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:58:12,616 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired
2023-06-08 18:58:12,616 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort
2023-06-08 18:58:12,616 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired
2023-06-08 18:58:12,616 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-06-08 18:58:12,616 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:12,616 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:12,616 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:58:12,616 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error)
2023-06-08 18:58:12,616 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:12,617 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:12,617 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 18:58:12,617 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort
2023-06-08 18:58:12,617 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-06-08 18:58:22,613 DEBUG [Listener at localhost.localdomain/42847] client.HBaseAdmin(2704): Getting current status of procedure from master...
2023-06-08 18:58:22,614 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38245] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done
2023-06-08 18:58:22,631 INFO [Listener at localhost.localdomain/42847] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/WALs/jenkins-hbase17.apache.org,35651,1686250639750/jenkins-hbase17.apache.org%2C35651%2C1686250639750.1686250692525 with entries=3, filesize=1.97 KB; new WAL /user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/WALs/jenkins-hbase17.apache.org,35651,1686250639750/jenkins-hbase17.apache.org%2C35651%2C1686250639750.1686250702619
2023-06-08 18:58:22,631 DEBUG [Listener at localhost.localdomain/42847] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43889,DS-9a89c9ad-60bc-4dc6-8b07-420baa35210a,DISK], DatanodeInfoWithStorage[127.0.0.1:35855,DS-258e5b99-0b54-41fa-997f-434012dd8ce2,DISK]]
2023-06-08 18:58:22,631 DEBUG [Listener at localhost.localdomain/42847] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/WALs/jenkins-hbase17.apache.org,35651,1686250639750/jenkins-hbase17.apache.org%2C35651%2C1686250639750.1686250692525 is not closed yet, will try archiving it next time
2023-06-08 18:58:22,631 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/WALs/jenkins-hbase17.apache.org,35651,1686250639750/jenkins-hbase17.apache.org%2C35651%2C1686250639750.1686250681980 to hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/oldWALs/jenkins-hbase17.apache.org%2C35651%2C1686250639750.1686250681980
2023-06-08 18:58:22,631 INFO [Listener at localhost.localdomain/42847] hbase.HBaseTestingUtility(1286): Shutting down minicluster
2023-06-08 18:58:22,631 INFO [Listener at localhost.localdomain/42847] client.ConnectionImplementation(1980): Closing master protocol: MasterService
2023-06-08 18:58:22,631 DEBUG [Listener at localhost.localdomain/42847] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6b45324b to 127.0.0.1:54046
2023-06-08 18:58:22,633 DEBUG [Listener at localhost.localdomain/42847] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-08 18:58:22,634 DEBUG [Listener at localhost.localdomain/42847] util.JVMClusterUtil(237): Shutting down HBase Cluster
2023-06-08 18:58:22,634 DEBUG [Listener at localhost.localdomain/42847] util.JVMClusterUtil(257): Found active master hash=855812681, stopped=false
2023-06-08 18:58:22,634 INFO [Listener at localhost.localdomain/42847] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase17.apache.org,38245,1686250639701
2023-06-08 18:58:22,636 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-06-08 18:58:22,636 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-06-08 18:58:22,636 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-08 18:58:22,636 INFO [Listener at localhost.localdomain/42847] procedure2.ProcedureExecutor(629): Stopping
2023-06-08 18:58:22,636 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-08 18:58:22,637 DEBUG [Listener at localhost.localdomain/42847] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7e627707 to 127.0.0.1:54046
2023-06-08 18:58:22,637 DEBUG [Listener at localhost.localdomain/42847] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-08 18:58:22,637 INFO [Listener at localhost.localdomain/42847] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase17.apache.org,35651,1686250639750' *****
2023-06-08 18:58:22,637 INFO [Listener at localhost.localdomain/42847] regionserver.HRegionServer(2309): STOPPED: Shutdown requested
2023-06-08 18:58:22,637 INFO [RS:0;jenkins-hbase17:35651] regionserver.HeapMemoryManager(220): Stopping
2023-06-08 18:58:22,637 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-08 18:58:22,638 INFO [RS:0;jenkins-hbase17:35651] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully.
2023-06-08 18:58:22,638 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting
2023-06-08 18:58:22,638 INFO [RS:0;jenkins-hbase17:35651] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully.
2023-06-08 18:58:22,638 INFO [RS:0;jenkins-hbase17:35651] regionserver.HRegionServer(3303): Received CLOSE for efe8d972490a683488eae09798e89b28
2023-06-08 18:58:22,638 INFO [RS:0;jenkins-hbase17:35651] regionserver.HRegionServer(3303): Received CLOSE for 63807ad67750a3f7815918af3a920e01
2023-06-08 18:58:22,638 INFO [RS:0;jenkins-hbase17:35651] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:58:22,639 DEBUG [RS:0;jenkins-hbase17:35651] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1b12ccbd to 127.0.0.1:54046
2023-06-08 18:58:22,639 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing efe8d972490a683488eae09798e89b28, disabling compactions & flushes
2023-06-08 18:58:22,639 DEBUG [RS:0;jenkins-hbase17:35651] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-08 18:58:22,639 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686250640829.efe8d972490a683488eae09798e89b28.
2023-06-08 18:58:22,639 INFO [RS:0;jenkins-hbase17:35651] regionserver.CompactSplit(434): Waiting for Split Thread to finish...
2023-06-08 18:58:22,639 INFO [RS:0;jenkins-hbase17:35651] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish...
2023-06-08 18:58:22,639 INFO [RS:0;jenkins-hbase17:35651] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish...
2023-06-08 18:58:22,639 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686250640829.efe8d972490a683488eae09798e89b28.
2023-06-08 18:58:22,639 INFO [RS:0;jenkins-hbase17:35651] regionserver.HRegionServer(3303): Received CLOSE for 1588230740
2023-06-08 18:58:22,639 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686250640829.efe8d972490a683488eae09798e89b28. after waiting 0 ms
2023-06-08 18:58:22,639 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686250640829.efe8d972490a683488eae09798e89b28.
2023-06-08 18:58:22,639 INFO [RS:0;jenkins-hbase17:35651] regionserver.HRegionServer(1474): Waiting on 3 regions to close
2023-06-08 18:58:22,639 DEBUG [RS:0;jenkins-hbase17:35651] regionserver.HRegionServer(1478): Online Regions={efe8d972490a683488eae09798e89b28=hbase:namespace,,1686250640829.efe8d972490a683488eae09798e89b28., 63807ad67750a3f7815918af3a920e01=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01., 1588230740=hbase:meta,,1.1588230740}
2023-06-08 18:58:22,639 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes
2023-06-08 18:58:22,639 DEBUG [RS:0;jenkins-hbase17:35651] regionserver.HRegionServer(1504): Waiting on 1588230740, 63807ad67750a3f7815918af3a920e01, efe8d972490a683488eae09798e89b28
2023-06-08 18:58:22,639 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740
2023-06-08 18:58:22,640 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740
2023-06-08 18:58:22,640 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms
2023-06-08 18:58:22,640 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740
2023-06-08 18:58:22,640 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.10 KB heapSize=5.61 KB
2023-06-08 18:58:22,650 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/namespace/efe8d972490a683488eae09798e89b28/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1
2023-06-08 18:58:22,651 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686250640829.efe8d972490a683488eae09798e89b28.
2023-06-08 18:58:22,651 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for efe8d972490a683488eae09798e89b28:
2023-06-08 18:58:22,651 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1686250640829.efe8d972490a683488eae09798e89b28.
2023-06-08 18:58:22,651 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 63807ad67750a3f7815918af3a920e01, disabling compactions & flushes
2023-06-08 18:58:22,651 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01.
2023-06-08 18:58:22,651 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01.
2023-06-08 18:58:22,651 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01. after waiting 0 ms
2023-06-08 18:58:22,651 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01.
2023-06-08 18:58:22,652 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 63807ad67750a3f7815918af3a920e01 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB
2023-06-08 18:58:22,655 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.85 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/meta/1588230740/.tmp/info/70919c349e8e4e328f301602cf027ca5
2023-06-08 18:58:22,666 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=22 (bloomFilter=true), to=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/.tmp/info/c8403dd5db9b4e8794b2ced3ad733e2d
2023-06-08 18:58:22,675 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/.tmp/info/c8403dd5db9b4e8794b2ced3ad733e2d as hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/info/c8403dd5db9b4e8794b2ced3ad733e2d
2023-06-08 18:58:22,681 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/info/c8403dd5db9b4e8794b2ced3ad733e2d, entries=1, sequenceid=22, filesize=5.8 K
2023-06-08 18:58:22,682 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 63807ad67750a3f7815918af3a920e01 in 30ms, sequenceid=22, compaction requested=true
2023-06-08 18:58:22,683 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=264 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/meta/1588230740/.tmp/table/1d980334958d464fac9de5bf34877b65
2023-06-08 18:58:22,686 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/info/c53e4ec5cd004554acb738c1ea0df362, hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/info/665ea43ee4624914965b37aae408c36b, hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/info/74e0f5ad27ab485c803528e88a9fef38] to archive
2023-06-08 18:58:22,687 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01.-1] backup.HFileArchiver(360): Archiving compacted files.
2023-06-08 18:58:22,689 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/info/c53e4ec5cd004554acb738c1ea0df362 to hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/info/c53e4ec5cd004554acb738c1ea0df362
2023-06-08 18:58:22,691 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/info/665ea43ee4624914965b37aae408c36b to hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/info/665ea43ee4624914965b37aae408c36b
2023-06-08 18:58:22,692 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/info/74e0f5ad27ab485c803528e88a9fef38 to hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/info/74e0f5ad27ab485c803528e88a9fef38
2023-06-08 18:58:22,693 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/meta/1588230740/.tmp/info/70919c349e8e4e328f301602cf027ca5 as hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/meta/1588230740/info/70919c349e8e4e328f301602cf027ca5
2023-06-08 18:58:22,697 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/63807ad67750a3f7815918af3a920e01/recovered.edits/25.seqid, newMaxSeqId=25, maxSeqId=1
2023-06-08 18:58:22,698 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01.
2023-06-08 18:58:22,698 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 63807ad67750a3f7815918af3a920e01:
2023-06-08 18:58:22,699 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686250641409.63807ad67750a3f7815918af3a920e01.
2023-06-08 18:58:22,701 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/meta/1588230740/info/70919c349e8e4e328f301602cf027ca5, entries=20, sequenceid=14, filesize=7.6 K
2023-06-08 18:58:22,702 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/meta/1588230740/.tmp/table/1d980334958d464fac9de5bf34877b65 as hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/meta/1588230740/table/1d980334958d464fac9de5bf34877b65
2023-06-08 18:58:22,708 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/meta/1588230740/table/1d980334958d464fac9de5bf34877b65, entries=4, sequenceid=14, filesize=4.9 K
2023-06-08 18:58:22,709 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.10 KB/3178, heapSize ~5.33 KB/5456, currentSize=0 B/0 for 1588230740 in 69ms, sequenceid=14, compaction requested=false
2023-06-08 18:58:22,715 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1
2023-06-08 18:58:22,716 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2023-06-08 18:58:22,716 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740
2023-06-08 18:58:22,716 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-06-08 18:58:22,716 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740
2023-06-08 18:58:22,840 INFO [RS:0;jenkins-hbase17:35651] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,35651,1686250639750; all regions closed.
2023-06-08 18:58:22,840 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/WALs/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:58:22,854 DEBUG [RS:0;jenkins-hbase17:35651] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/oldWALs
2023-06-08 18:58:22,854 INFO [RS:0;jenkins-hbase17:35651] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase17.apache.org%2C35651%2C1686250639750.meta:.meta(num 1686250640748)
2023-06-08 18:58:22,855 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/WALs/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:58:22,861 DEBUG [RS:0;jenkins-hbase17:35651] wal.AbstractFSWAL(1028): Moved 2 WAL file(s) to /user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/oldWALs
2023-06-08 18:58:22,861 INFO [RS:0;jenkins-hbase17:35651] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase17.apache.org%2C35651%2C1686250639750:(num 1686250702619)
2023-06-08 18:58:22,861 DEBUG [RS:0;jenkins-hbase17:35651] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-08 18:58:22,861 INFO [RS:0;jenkins-hbase17:35651] regionserver.LeaseManager(133): Closed leases
2023-06-08 18:58:22,862 INFO [RS:0;jenkins-hbase17:35651] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown
2023-06-08 18:58:22,862 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-06-08 18:58:22,863 INFO [RS:0;jenkins-hbase17:35651] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:35651
2023-06-08 18:58:22,867 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,35651,1686250639750
2023-06-08 18:58:22,867 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-06-08 18:58:22,867 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-06-08 18:58:22,868 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,35651,1686250639750]
2023-06-08 18:58:22,868 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,35651,1686250639750; numProcessing=1
2023-06-08 18:58:22,869 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,35651,1686250639750 already deleted, retry=false
2023-06-08 18:58:22,869 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,35651,1686250639750 expired; onlineServers=0
2023-06-08 18:58:22,870 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase17.apache.org,38245,1686250639701' *****
2023-06-08 18:58:22,870 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0
2023-06-08 18:58:22,870 DEBUG [M:0;jenkins-hbase17:38245] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@373d28f1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase17.apache.org/136.243.18.41:0
2023-06-08 18:58:22,870 INFO [M:0;jenkins-hbase17:38245] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,38245,1686250639701
2023-06-08 18:58:22,870 INFO [M:0;jenkins-hbase17:38245] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,38245,1686250639701; all regions closed.
2023-06-08 18:58:22,870 DEBUG [M:0;jenkins-hbase17:38245] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-08 18:58:22,870 DEBUG [M:0;jenkins-hbase17:38245] cleaner.LogCleaner(198): Cancelling LogCleaner
2023-06-08 18:58:22,871 DEBUG [M:0;jenkins-hbase17:38245] cleaner.HFileCleaner(317): Stopping file delete threads
2023-06-08 18:58:22,871 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1686250639952] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1686250639952,5,FailOnTimeoutGroup]
2023-06-08 18:58:22,871 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1686250639952] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1686250639952,5,FailOnTimeoutGroup]
2023-06-08 18:58:22,871 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting.
2023-06-08 18:58:22,871 INFO [M:0;jenkins-hbase17:38245] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish...
2023-06-08 18:58:22,872 INFO [M:0;jenkins-hbase17:38245] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish...
2023-06-08 18:58:22,872 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2023-06-08 18:58:22,872 INFO [M:0;jenkins-hbase17:38245] hbase.ChoreService(369): Chore service for: master/jenkins-hbase17:0 had [] on shutdown
2023-06-08 18:58:22,872 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-08 18:58:22,872 DEBUG [M:0;jenkins-hbase17:38245] master.HMaster(1512): Stopping service threads
2023-06-08 18:58:22,872 INFO [M:0;jenkins-hbase17:38245] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher
2023-06-08 18:58:22,873 ERROR [M:0;jenkins-hbase17:38245] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10]
2023-06-08 18:58:22,873 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-06-08 18:58:22,873 INFO [M:0;jenkins-hbase17:38245] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false
2023-06-08 18:58:22,873 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating.
2023-06-08 18:58:22,873 DEBUG [M:0;jenkins-hbase17:38245] zookeeper.ZKUtil(398): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-06-08 18:58:22,874 WARN [M:0;jenkins-hbase17:38245] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-06-08 18:58:22,874 INFO [M:0;jenkins-hbase17:38245] assignment.AssignmentManager(315): Stopping assignment manager
2023-06-08 18:58:22,874 INFO [M:0;jenkins-hbase17:38245] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-06-08 18:58:22,874 DEBUG [M:0;jenkins-hbase17:38245] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-06-08 18:58:22,874 INFO [M:0;jenkins-hbase17:38245] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-08 18:58:22,874 DEBUG [M:0;jenkins-hbase17:38245] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-08 18:58:22,874 DEBUG [M:0;jenkins-hbase17:38245] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-06-08 18:58:22,875 DEBUG [M:0;jenkins-hbase17:38245] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-08 18:58:22,875 INFO [M:0;jenkins-hbase17:38245] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.92 KB heapSize=47.38 KB
2023-06-08 18:58:22,888 INFO [M:0;jenkins-hbase17:38245] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.92 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/5655f5f7d68b4351a2a8b7a219e38a51
2023-06-08 18:58:22,892 INFO [M:0;jenkins-hbase17:38245] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5655f5f7d68b4351a2a8b7a219e38a51
2023-06-08 18:58:22,893 DEBUG [M:0;jenkins-hbase17:38245] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/5655f5f7d68b4351a2a8b7a219e38a51 as hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/5655f5f7d68b4351a2a8b7a219e38a51
2023-06-08 18:58:22,899 INFO [M:0;jenkins-hbase17:38245] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5655f5f7d68b4351a2a8b7a219e38a51
2023-06-08 18:58:22,900 INFO [M:0;jenkins-hbase17:38245] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40035/user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/5655f5f7d68b4351a2a8b7a219e38a51, entries=11, sequenceid=100, filesize=6.1 K
2023-06-08 18:58:22,901 INFO [M:0;jenkins-hbase17:38245] regionserver.HRegion(2948): Finished flush of dataSize ~38.92 KB/39854, heapSize ~47.36 KB/48496, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 26ms, sequenceid=100,
compaction requested=false 2023-06-08 18:58:22,902 INFO [M:0;jenkins-hbase17:38245] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 18:58:22,902 DEBUG [M:0;jenkins-hbase17:38245] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-08 18:58:22,902 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/300368b6-5d22-c914-497b-7ede52d7cbb6/MasterData/WALs/jenkins-hbase17.apache.org,38245,1686250639701 2023-06-08 18:58:22,905 INFO [M:0;jenkins-hbase17:38245] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-06-08 18:58:22,905 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-08 18:58:22,906 INFO [M:0;jenkins-hbase17:38245] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:38245 2023-06-08 18:58:22,907 DEBUG [M:0;jenkins-hbase17:38245] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase17.apache.org,38245,1686250639701 already deleted, retry=false 2023-06-08 18:58:22,969 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 18:58:22,969 INFO [RS:0;jenkins-hbase17:35651] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,35651,1686250639750; zookeeper connection closed. 
2023-06-08 18:58:22,969 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): regionserver:35651-0x100abcc4c560001, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 18:58:22,970 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@432e168b] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@432e168b 2023-06-08 18:58:22,970 INFO [Listener at localhost.localdomain/42847] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-06-08 18:58:23,069 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 18:58:23,069 DEBUG [Listener at localhost.localdomain/42847-EventThread] zookeeper.ZKWatcher(600): master:38245-0x100abcc4c560000, quorum=127.0.0.1:54046, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 18:58:23,069 INFO [M:0;jenkins-hbase17:38245] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,38245,1686250639701; zookeeper connection closed. 
2023-06-08 18:58:23,070 WARN [Listener at localhost.localdomain/42847] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 18:58:23,077 INFO [Listener at localhost.localdomain/42847] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 18:58:23,186 WARN [BP-364929161-136.243.18.41-1686250639107 heartbeating to localhost.localdomain/127.0.0.1:40035] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-08 18:58:23,186 WARN [BP-364929161-136.243.18.41-1686250639107 heartbeating to localhost.localdomain/127.0.0.1:40035] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-364929161-136.243.18.41-1686250639107 (Datanode Uuid 120aecb7-e4b4-4e07-b446-10535e5494bc) service to localhost.localdomain/127.0.0.1:40035 2023-06-08 18:58:23,187 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b/cluster_8194326c-3876-ab6a-789e-6ee4ac35b344/dfs/data/data3/current/BP-364929161-136.243.18.41-1686250639107] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 18:58:23,188 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b/cluster_8194326c-3876-ab6a-789e-6ee4ac35b344/dfs/data/data4/current/BP-364929161-136.243.18.41-1686250639107] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 18:58:23,190 WARN [Listener at localhost.localdomain/42847] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 18:58:23,194 INFO [Listener at localhost.localdomain/42847] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 18:58:23,301 WARN 
[BP-364929161-136.243.18.41-1686250639107 heartbeating to localhost.localdomain/127.0.0.1:40035] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-08 18:58:23,301 WARN [BP-364929161-136.243.18.41-1686250639107 heartbeating to localhost.localdomain/127.0.0.1:40035] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-364929161-136.243.18.41-1686250639107 (Datanode Uuid bfd90cac-e63b-42e6-941c-f14a5cb9408d) service to localhost.localdomain/127.0.0.1:40035 2023-06-08 18:58:23,302 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b/cluster_8194326c-3876-ab6a-789e-6ee4ac35b344/dfs/data/data1/current/BP-364929161-136.243.18.41-1686250639107] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 18:58:23,302 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b/cluster_8194326c-3876-ab6a-789e-6ee4ac35b344/dfs/data/data2/current/BP-364929161-136.243.18.41-1686250639107] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 18:58:23,318 INFO [Listener at localhost.localdomain/42847] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-06-08 18:58:23,434 INFO [Listener at localhost.localdomain/42847] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-06-08 18:58:23,452 INFO [Listener at localhost.localdomain/42847] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-06-08 18:58:23,463 INFO [Listener at localhost.localdomain/42847] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=96 (was 88) - Thread LEAK? 
-, OpenFileDescriptor=495 (was 476) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=281 (was 295), ProcessCount=184 (was 184), AvailableMemoryMB=1181 (was 1614) 2023-06-08 18:58:23,474 INFO [Listener at localhost.localdomain/42847] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRolling Thread=97, OpenFileDescriptor=495, MaxFileDescriptor=60000, SystemLoadAverage=281, ProcessCount=184, AvailableMemoryMB=1180 2023-06-08 18:58:23,474 INFO [Listener at localhost.localdomain/42847] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-06-08 18:58:23,474 INFO [Listener at localhost.localdomain/42847] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b/hadoop.log.dir so I do NOT create it in target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa 2023-06-08 18:58:23,474 INFO [Listener at localhost.localdomain/42847] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0f15683f-4228-b653-c7ba-fa033baf883b/hadoop.tmp.dir so I do NOT create it in target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa 2023-06-08 18:58:23,474 INFO [Listener at localhost.localdomain/42847] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa/cluster_0fd21936-5621-dab6-bd95-bdd5a304d1ad, deleteOnExit=true 2023-06-08 18:58:23,474 INFO [Listener at localhost.localdomain/42847] 
hbase.HBaseTestingUtility(1082): STARTING DFS 2023-06-08 18:58:23,474 INFO [Listener at localhost.localdomain/42847] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa/test.cache.data in system properties and HBase conf 2023-06-08 18:58:23,475 INFO [Listener at localhost.localdomain/42847] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa/hadoop.tmp.dir in system properties and HBase conf 2023-06-08 18:58:23,475 INFO [Listener at localhost.localdomain/42847] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa/hadoop.log.dir in system properties and HBase conf 2023-06-08 18:58:23,475 INFO [Listener at localhost.localdomain/42847] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa/mapreduce.cluster.local.dir in system properties and HBase conf 2023-06-08 18:58:23,475 INFO [Listener at localhost.localdomain/42847] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-06-08 18:58:23,475 INFO [Listener at localhost.localdomain/42847] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-06-08 18:58:23,475 DEBUG [Listener at localhost.localdomain/42847] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-06-08 18:58:23,475 INFO [Listener at localhost.localdomain/42847] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-06-08 18:58:23,475 INFO [Listener at localhost.localdomain/42847] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-06-08 18:58:23,475 INFO [Listener at localhost.localdomain/42847] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-06-08 18:58:23,475 INFO [Listener at localhost.localdomain/42847] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-08 18:58:23,476 INFO [Listener at localhost.localdomain/42847] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-06-08 18:58:23,476 INFO [Listener at localhost.localdomain/42847] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-08 18:58:23,476 INFO [Listener at localhost.localdomain/42847] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-08 18:58:23,476 INFO [Listener at localhost.localdomain/42847] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-08 18:58:23,476 INFO [Listener at localhost.localdomain/42847] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-08 18:58:23,476 INFO [Listener at localhost.localdomain/42847] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa/nfs.dump.dir in system properties and HBase conf 2023-06-08 18:58:23,476 INFO [Listener at localhost.localdomain/42847] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa/java.io.tmpdir in system properties and HBase conf 2023-06-08 18:58:23,476 INFO [Listener at localhost.localdomain/42847] 
hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-08 18:58:23,476 INFO [Listener at localhost.localdomain/42847] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-08 18:58:23,476 INFO [Listener at localhost.localdomain/42847] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-08 18:58:23,478 WARN [Listener at localhost.localdomain/42847] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-06-08 18:58:23,479 WARN [Listener at localhost.localdomain/42847] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-08 18:58:23,479 WARN [Listener at localhost.localdomain/42847] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-08 18:58:23,501 WARN [Listener at localhost.localdomain/42847] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 18:58:23,503 INFO [Listener at localhost.localdomain/42847] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 18:58:23,508 INFO [Listener at localhost.localdomain/42847] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa/java.io.tmpdir/Jetty_localhost_localdomain_35453_hdfs____.l1yzue/webapp 2023-06-08 18:58:23,580 INFO [Listener at localhost.localdomain/42847] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:35453 2023-06-08 18:58:23,582 WARN [Listener at localhost.localdomain/42847] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-06-08 18:58:23,583 WARN [Listener at localhost.localdomain/42847] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-08 18:58:23,583 WARN [Listener at localhost.localdomain/42847] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-08 18:58:23,605 WARN [Listener at localhost.localdomain/36619] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 18:58:23,613 WARN [Listener at localhost.localdomain/36619] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 18:58:23,616 WARN [Listener at localhost.localdomain/36619] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 18:58:23,617 INFO [Listener at localhost.localdomain/36619] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 18:58:23,623 INFO [Listener at localhost.localdomain/36619] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa/java.io.tmpdir/Jetty_localhost_36835_datanode____j6hcji/webapp 2023-06-08 18:58:23,700 INFO [Listener at localhost.localdomain/36619] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36835 2023-06-08 18:58:23,706 WARN [Listener at localhost.localdomain/43525] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 18:58:23,716 WARN [Listener at localhost.localdomain/43525] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 18:58:23,719 WARN [Listener at localhost.localdomain/43525] http.HttpRequestLog(97): Jetty request 
log can only be enabled using Log4j 2023-06-08 18:58:23,720 INFO [Listener at localhost.localdomain/43525] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 18:58:23,775 INFO [Listener at localhost.localdomain/43525] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa/java.io.tmpdir/Jetty_localhost_35531_datanode____.rrhr6w/webapp 2023-06-08 18:58:23,840 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x636e129ba6344f7e: Processing first storage report for DS-76108eb8-f84a-46fe-bd7e-9bd153114a8f from datanode ded236d7-ecf3-416e-8e7f-2ab01d1a3c97 2023-06-08 18:58:23,840 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x636e129ba6344f7e: from storage DS-76108eb8-f84a-46fe-bd7e-9bd153114a8f node DatanodeRegistration(127.0.0.1:40119, datanodeUuid=ded236d7-ecf3-416e-8e7f-2ab01d1a3c97, infoPort=37293, infoSecurePort=0, ipcPort=43525, storageInfo=lv=-57;cid=testClusterID;nsid=1807983113;c=1686250703480), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-08 18:58:23,840 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x636e129ba6344f7e: Processing first storage report for DS-97a068f9-56ce-40ba-b27c-a5fb74a1ac06 from datanode ded236d7-ecf3-416e-8e7f-2ab01d1a3c97 2023-06-08 18:58:23,840 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x636e129ba6344f7e: from storage DS-97a068f9-56ce-40ba-b27c-a5fb74a1ac06 node DatanodeRegistration(127.0.0.1:40119, datanodeUuid=ded236d7-ecf3-416e-8e7f-2ab01d1a3c97, infoPort=37293, infoSecurePort=0, ipcPort=43525, 
storageInfo=lv=-57;cid=testClusterID;nsid=1807983113;c=1686250703480), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 18:58:23,880 INFO [Listener at localhost.localdomain/43525] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35531 2023-06-08 18:58:23,888 WARN [Listener at localhost.localdomain/41149] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 18:58:23,945 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9ee7d47c483b58d9: Processing first storage report for DS-5737ba8a-19a1-4285-82ac-65a2e3fabefd from datanode f5a0fb88-cdbe-4e65-be49-721f86041691 2023-06-08 18:58:23,945 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9ee7d47c483b58d9: from storage DS-5737ba8a-19a1-4285-82ac-65a2e3fabefd node DatanodeRegistration(127.0.0.1:40371, datanodeUuid=f5a0fb88-cdbe-4e65-be49-721f86041691, infoPort=43179, infoSecurePort=0, ipcPort=41149, storageInfo=lv=-57;cid=testClusterID;nsid=1807983113;c=1686250703480), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 18:58:23,945 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9ee7d47c483b58d9: Processing first storage report for DS-70b98e14-ff9e-445a-979f-d5785a76ee89 from datanode f5a0fb88-cdbe-4e65-be49-721f86041691 2023-06-08 18:58:23,945 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9ee7d47c483b58d9: from storage DS-70b98e14-ff9e-445a-979f-d5785a76ee89 node DatanodeRegistration(127.0.0.1:40371, datanodeUuid=f5a0fb88-cdbe-4e65-be49-721f86041691, infoPort=43179, infoSecurePort=0, ipcPort=41149, storageInfo=lv=-57;cid=testClusterID;nsid=1807983113;c=1686250703480), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 18:58:23,997 DEBUG [Listener at 
localhost.localdomain/41149] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa 2023-06-08 18:58:24,000 INFO [Listener at localhost.localdomain/41149] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa/cluster_0fd21936-5621-dab6-bd95-bdd5a304d1ad/zookeeper_0, clientPort=58592, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa/cluster_0fd21936-5621-dab6-bd95-bdd5a304d1ad/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa/cluster_0fd21936-5621-dab6-bd95-bdd5a304d1ad/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-06-08 18:58:24,002 INFO [Listener at localhost.localdomain/41149] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=58592 2023-06-08 18:58:24,002 INFO [Listener at localhost.localdomain/41149] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 18:58:24,003 INFO [Listener at localhost.localdomain/41149] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 18:58:24,022 INFO [Listener at localhost.localdomain/41149] util.FSUtils(471): Created version file at 
hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb with version=8 2023-06-08 18:58:24,022 INFO [Listener at localhost.localdomain/41149] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/hbase-staging 2023-06-08 18:58:24,024 INFO [Listener at localhost.localdomain/41149] client.ConnectionUtils(127): master/jenkins-hbase17:0 server-side Connection retries=45 2023-06-08 18:58:24,024 INFO [Listener at localhost.localdomain/41149] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 18:58:24,024 INFO [Listener at localhost.localdomain/41149] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-08 18:58:24,024 INFO [Listener at localhost.localdomain/41149] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-08 18:58:24,024 INFO [Listener at localhost.localdomain/41149] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 18:58:24,025 INFO [Listener at localhost.localdomain/41149] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-08 18:58:24,025 INFO [Listener at localhost.localdomain/41149] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, 
hbase.pb.AdminService 2023-06-08 18:58:24,027 INFO [Listener at localhost.localdomain/41149] ipc.NettyRpcServer(120): Bind to /136.243.18.41:36347 2023-06-08 18:58:24,027 INFO [Listener at localhost.localdomain/41149] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 18:58:24,028 INFO [Listener at localhost.localdomain/41149] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 18:58:24,029 INFO [Listener at localhost.localdomain/41149] zookeeper.RecoverableZooKeeper(93): Process identifier=master:36347 connecting to ZooKeeper ensemble=127.0.0.1:58592 2023-06-08 18:58:24,038 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): master:363470x0, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-08 18:58:24,040 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:36347-0x100abcd479d0000 connected 2023-06-08 18:58:24,049 DEBUG [Listener at localhost.localdomain/41149] zookeeper.ZKUtil(164): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-08 18:58:24,049 DEBUG [Listener at localhost.localdomain/41149] zookeeper.ZKUtil(164): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 18:58:24,049 DEBUG [Listener at localhost.localdomain/41149] zookeeper.ZKUtil(164): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-08 18:58:24,050 DEBUG [Listener at localhost.localdomain/41149] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36347 2023-06-08 18:58:24,050 DEBUG [Listener at localhost.localdomain/41149] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36347 2023-06-08 18:58:24,050 DEBUG [Listener at localhost.localdomain/41149] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36347 2023-06-08 18:58:24,050 DEBUG [Listener at localhost.localdomain/41149] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36347 2023-06-08 18:58:24,051 DEBUG [Listener at localhost.localdomain/41149] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36347 2023-06-08 18:58:24,051 INFO [Listener at localhost.localdomain/41149] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb, hbase.cluster.distributed=false 2023-06-08 18:58:24,062 INFO [Listener at localhost.localdomain/41149] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45 2023-06-08 18:58:24,062 INFO [Listener at localhost.localdomain/41149] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 18:58:24,062 INFO [Listener at localhost.localdomain/41149] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-08 18:58:24,062 INFO [Listener at localhost.localdomain/41149] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-08 18:58:24,062 INFO [Listener at localhost.localdomain/41149] ipc.RpcExecutor(189): Instantiated 
replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 18:58:24,062 INFO [Listener at localhost.localdomain/41149] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-08 18:58:24,062 INFO [Listener at localhost.localdomain/41149] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-08 18:58:24,064 INFO [Listener at localhost.localdomain/41149] ipc.NettyRpcServer(120): Bind to /136.243.18.41:43115 2023-06-08 18:58:24,064 INFO [Listener at localhost.localdomain/41149] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-08 18:58:24,065 DEBUG [Listener at localhost.localdomain/41149] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-08 18:58:24,065 INFO [Listener at localhost.localdomain/41149] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 18:58:24,066 INFO [Listener at localhost.localdomain/41149] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 18:58:24,067 INFO [Listener at localhost.localdomain/41149] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43115 connecting to ZooKeeper ensemble=127.0.0.1:58592 2023-06-08 18:58:24,069 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): regionserver:431150x0, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-08 18:58:24,070 INFO 
[regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-08 18:58:24,071 DEBUG [Listener at localhost.localdomain/41149] zookeeper.ZKUtil(164): regionserver:431150x0, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-08 18:58:24,071 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43115-0x100abcd479d0001 connected 2023-06-08 18:58:24,072 DEBUG [Listener at localhost.localdomain/41149] zookeeper.ZKUtil(164): regionserver:43115-0x100abcd479d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 18:58:24,072 DEBUG [Listener at localhost.localdomain/41149] zookeeper.ZKUtil(164): regionserver:43115-0x100abcd479d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-08 18:58:24,072 DEBUG [Listener at localhost.localdomain/41149] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43115 2023-06-08 18:58:24,073 DEBUG [Listener at localhost.localdomain/41149] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43115 2023-06-08 18:58:24,073 DEBUG [Listener at localhost.localdomain/41149] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43115 2023-06-08 18:58:24,073 DEBUG [Listener at localhost.localdomain/41149] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43115 2023-06-08 18:58:24,073 DEBUG [Listener at localhost.localdomain/41149] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43115 2023-06-08 18:58:24,074 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode 
/hbase/backup-masters/jenkins-hbase17.apache.org,36347,1686250704023 2023-06-08 18:58:24,076 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-08 18:58:24,076 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase17.apache.org,36347,1686250704023 2023-06-08 18:58:24,077 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-08 18:58:24,077 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): regionserver:43115-0x100abcd479d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-08 18:58:24,077 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:58:24,077 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-08 18:58:24,078 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase17.apache.org,36347,1686250704023 from backup master directory 2023-06-08 18:58:24,078 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:36347-0x100abcd479d0000, 
quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-08 18:58:24,079 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase17.apache.org,36347,1686250704023 2023-06-08 18:58:24,079 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-08 18:58:24,079 WARN [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-06-08 18:58:24,079 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase17.apache.org,36347,1686250704023 2023-06-08 18:58:24,090 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/hbase.id with ID: c65fb6bd-b5bc-4365-8e49-25cc5c2e4795 2023-06-08 18:58:24,099 INFO [master/jenkins-hbase17:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 18:58:24,101 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:58:24,107 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x0e9ba322 
to 127.0.0.1:58592 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-08 18:58:24,111 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@53affa4c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-08 18:58:24,111 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-08 18:58:24,111 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-06-08 18:58:24,112 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 18:58:24,113 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/MasterData/data/master/store-tmp 2023-06-08 18:58:24,123 DEBUG 
[master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:58:24,123 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-08 18:58:24,123 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 18:58:24,123 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 18:58:24,123 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-08 18:58:24,123 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 18:58:24,123 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-08 18:58:24,123 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-08 18:58:24,124 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/MasterData/WALs/jenkins-hbase17.apache.org,36347,1686250704023 2023-06-08 18:58:24,126 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C36347%2C1686250704023, suffix=, logDir=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/MasterData/WALs/jenkins-hbase17.apache.org,36347,1686250704023, archiveDir=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/MasterData/oldWALs, maxLogs=10 2023-06-08 18:58:24,134 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/MasterData/WALs/jenkins-hbase17.apache.org,36347,1686250704023/jenkins-hbase17.apache.org%2C36347%2C1686250704023.1686250704126 2023-06-08 18:58:24,134 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40371,DS-5737ba8a-19a1-4285-82ac-65a2e3fabefd,DISK], DatanodeInfoWithStorage[127.0.0.1:40119,DS-76108eb8-f84a-46fe-bd7e-9bd153114a8f,DISK]] 2023-06-08 18:58:24,134 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-08 18:58:24,134 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:58:24,134 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-08 18:58:24,134 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-08 18:58:24,136 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-06-08 18:58:24,138 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-08 18:58:24,138 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-08 18:58:24,139 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] 
regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:58:24,139 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-08 18:58:24,140 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-08 18:58:24,142 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-08 18:58:24,147 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 18:58:24,148 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=717822, jitterRate=-0.08724308013916016}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-08 18:58:24,148 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-08 18:58:24,150 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, 
flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-08 18:58:24,151 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-08 18:58:24,151 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-06-08 18:58:24,151 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-06-08 18:58:24,152 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-06-08 18:58:24,152 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-06-08 18:58:24,152 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-08 18:58:24,157 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-08 18:58:24,158 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-06-08 18:58:24,172 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-08 18:58:24,172 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-06-08 18:58:24,173 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-08 18:58:24,173 INFO [master/jenkins-hbase17:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-08 18:58:24,173 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-08 18:58:24,175 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:58:24,175 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-08 18:58:24,175 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): 
master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-08 18:58:24,176 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-08 18:58:24,177 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): regionserver:43115-0x100abcd479d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-08 18:58:24,177 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase17.apache.org,36347,1686250704023, sessionid=0x100abcd479d0000, setting cluster-up flag (Was=false) 2023-06-08 18:58:24,177 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-08 18:58:24,177 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:58:24,182 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-08 18:58:24,186 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,36347,1686250704023 2023-06-08 18:58:24,188 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): master:36347-0x100abcd479d0000, 
quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:58:24,190 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-08 18:58:24,191 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,36347,1686250704023 2023-06-08 18:58:24,191 WARN [master/jenkins-hbase17:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/.hbase-snapshot/.tmp 2023-06-08 18:58:24,194 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-08 18:58:24,194 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-06-08 18:58:24,194 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-06-08 18:58:24,194 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-06-08 18:58:24,194 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-06-08 18:58:24,194 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): 
Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase17:0, corePoolSize=10, maxPoolSize=10 2023-06-08 18:58:24,194 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:58:24,194 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-06-08 18:58:24,194 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:58:24,196 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1686250734196 2023-06-08 18:58:24,196 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-08 18:58:24,196 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-08 18:58:24,196 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-08 18:58:24,196 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-08 18:58:24,196 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-08 18:58:24,196 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.LogCleaner(148): 
Creating 1 old WALs cleaner threads 2023-06-08 18:58:24,196 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-08 18:58:24,197 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-08 18:58:24,197 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-08 18:58:24,197 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-08 18:58:24,197 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-08 18:58:24,197 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-08 18:58:24,197 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-08 18:58:24,197 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-08 18:58:24,197 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1686250704197,5,FailOnTimeoutGroup] 2023-06-08 18:58:24,197 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1686250704197,5,FailOnTimeoutGroup] 2023-06-08 18:58:24,197 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore 
name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-08 18:58:24,198 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-08 18:58:24,198 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-08 18:58:24,198 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-06-08 18:58:24,198 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-08 18:58:24,207 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into 
hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-08 18:58:24,208 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-08 18:58:24,208 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb 2023-06-08 18:58:24,218 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:58:24,219 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created 
cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-08 18:58:24,221 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/meta/1588230740/info 2023-06-08 18:58:24,221 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-08 18:58:24,221 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:58:24,222 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-08 18:58:24,222 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/meta/1588230740/rep_barrier 2023-06-08 18:58:24,223 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-08 18:58:24,223 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:58:24,223 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-08 18:58:24,224 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/meta/1588230740/table 2023-06-08 18:58:24,224 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 
2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-08 18:58:24,224 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:58:24,225 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/meta/1588230740 2023-06-08 18:58:24,225 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/meta/1588230740 2023-06-08 18:58:24,227 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-06-08 18:58:24,228 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-08 18:58:24,229 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 18:58:24,229 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=783806, jitterRate=-0.0033391565084457397}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-08 18:58:24,229 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-08 18:58:24,229 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-08 18:58:24,229 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-08 18:58:24,230 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-08 18:58:24,230 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-08 18:58:24,230 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-08 18:58:24,230 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-08 18:58:24,230 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-08 18:58:24,231 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-08 18:58:24,231 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-08 18:58:24,231 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): 
Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-08 18:58:24,232 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-08 18:58:24,234 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-08 18:58:24,277 INFO [RS:0;jenkins-hbase17:43115] regionserver.HRegionServer(951): ClusterId : c65fb6bd-b5bc-4365-8e49-25cc5c2e4795 2023-06-08 18:58:24,278 DEBUG [RS:0;jenkins-hbase17:43115] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-08 18:58:24,282 DEBUG [RS:0;jenkins-hbase17:43115] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-08 18:58:24,282 DEBUG [RS:0;jenkins-hbase17:43115] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-08 18:58:24,286 DEBUG [RS:0;jenkins-hbase17:43115] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-08 18:58:24,287 DEBUG [RS:0;jenkins-hbase17:43115] zookeeper.ReadOnlyZKClient(139): Connect 0x20b478f6 to 127.0.0.1:58592 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-08 18:58:24,292 DEBUG [RS:0;jenkins-hbase17:43115] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2e738290, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind 
address=null 2023-06-08 18:58:24,292 DEBUG [RS:0;jenkins-hbase17:43115] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2a4c1386, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-06-08 18:58:24,302 DEBUG [RS:0;jenkins-hbase17:43115] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase17:43115 2023-06-08 18:58:24,302 INFO [RS:0;jenkins-hbase17:43115] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-08 18:58:24,302 INFO [RS:0;jenkins-hbase17:43115] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-08 18:58:24,302 DEBUG [RS:0;jenkins-hbase17:43115] regionserver.HRegionServer(1022): About to register with Master. 2023-06-08 18:58:24,303 INFO [RS:0;jenkins-hbase17:43115] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase17.apache.org,36347,1686250704023 with isa=jenkins-hbase17.apache.org/136.243.18.41:43115, startcode=1686250704061 2023-06-08 18:58:24,303 DEBUG [RS:0;jenkins-hbase17:43115] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-08 18:58:24,307 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:46383, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-06-08 18:58:24,308 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36347] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,43115,1686250704061 2023-06-08 18:58:24,308 DEBUG [RS:0;jenkins-hbase17:43115] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb 
2023-06-08 18:58:24,308 DEBUG [RS:0;jenkins-hbase17:43115] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:36619 2023-06-08 18:58:24,308 DEBUG [RS:0;jenkins-hbase17:43115] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-08 18:58:24,309 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 18:58:24,310 DEBUG [RS:0;jenkins-hbase17:43115] zookeeper.ZKUtil(162): regionserver:43115-0x100abcd479d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,43115,1686250704061 2023-06-08 18:58:24,310 WARN [RS:0;jenkins-hbase17:43115] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-06-08 18:58:24,310 INFO [RS:0;jenkins-hbase17:43115] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 18:58:24,310 DEBUG [RS:0;jenkins-hbase17:43115] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/WALs/jenkins-hbase17.apache.org,43115,1686250704061 2023-06-08 18:58:24,310 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,43115,1686250704061] 2023-06-08 18:58:24,315 DEBUG [RS:0;jenkins-hbase17:43115] zookeeper.ZKUtil(162): regionserver:43115-0x100abcd479d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,43115,1686250704061 2023-06-08 18:58:24,316 DEBUG [RS:0;jenkins-hbase17:43115] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-08 18:58:24,316 INFO [RS:0;jenkins-hbase17:43115] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-08 18:58:24,317 INFO [RS:0;jenkins-hbase17:43115] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-08 18:58:24,318 INFO [RS:0;jenkins-hbase17:43115] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-08 18:58:24,318 INFO [RS:0;jenkins-hbase17:43115] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-06-08 18:58:24,318 INFO [RS:0;jenkins-hbase17:43115] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-08 18:58:24,319 INFO [RS:0;jenkins-hbase17:43115] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-06-08 18:58:24,319 DEBUG [RS:0;jenkins-hbase17:43115] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:58:24,319 DEBUG [RS:0;jenkins-hbase17:43115] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:58:24,319 DEBUG [RS:0;jenkins-hbase17:43115] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:58:24,320 DEBUG [RS:0;jenkins-hbase17:43115] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:58:24,320 DEBUG [RS:0;jenkins-hbase17:43115] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:58:24,320 DEBUG [RS:0;jenkins-hbase17:43115] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-06-08 18:58:24,320 DEBUG [RS:0;jenkins-hbase17:43115] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:58:24,320 DEBUG [RS:0;jenkins-hbase17:43115] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:58:24,320 DEBUG [RS:0;jenkins-hbase17:43115] 
executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:58:24,320 DEBUG [RS:0;jenkins-hbase17:43115] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:58:24,324 INFO [RS:0;jenkins-hbase17:43115] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-08 18:58:24,324 INFO [RS:0;jenkins-hbase17:43115] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-08 18:58:24,324 INFO [RS:0;jenkins-hbase17:43115] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-08 18:58:24,334 INFO [RS:0;jenkins-hbase17:43115] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-08 18:58:24,334 INFO [RS:0;jenkins-hbase17:43115] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,43115,1686250704061-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-06-08 18:58:24,345 INFO [RS:0;jenkins-hbase17:43115] regionserver.Replication(203): jenkins-hbase17.apache.org,43115,1686250704061 started 2023-06-08 18:58:24,345 INFO [RS:0;jenkins-hbase17:43115] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,43115,1686250704061, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:43115, sessionid=0x100abcd479d0001 2023-06-08 18:58:24,345 DEBUG [RS:0;jenkins-hbase17:43115] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-08 18:58:24,345 DEBUG [RS:0;jenkins-hbase17:43115] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,43115,1686250704061 2023-06-08 18:58:24,345 DEBUG [RS:0;jenkins-hbase17:43115] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,43115,1686250704061' 2023-06-08 18:58:24,345 DEBUG [RS:0;jenkins-hbase17:43115] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-08 18:58:24,345 DEBUG [RS:0;jenkins-hbase17:43115] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-08 18:58:24,346 DEBUG [RS:0;jenkins-hbase17:43115] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-08 18:58:24,346 DEBUG [RS:0;jenkins-hbase17:43115] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-08 18:58:24,346 DEBUG [RS:0;jenkins-hbase17:43115] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,43115,1686250704061 2023-06-08 18:58:24,346 DEBUG [RS:0;jenkins-hbase17:43115] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,43115,1686250704061' 2023-06-08 18:58:24,346 DEBUG [RS:0;jenkins-hbase17:43115] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures 
on node: '/hbase/online-snapshot/abort' 2023-06-08 18:58:24,346 DEBUG [RS:0;jenkins-hbase17:43115] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-08 18:58:24,346 DEBUG [RS:0;jenkins-hbase17:43115] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-08 18:58:24,346 INFO [RS:0;jenkins-hbase17:43115] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-08 18:58:24,346 INFO [RS:0;jenkins-hbase17:43115] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-06-08 18:58:24,384 DEBUG [jenkins-hbase17:36347] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-08 18:58:24,386 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,43115,1686250704061, state=OPENING 2023-06-08 18:58:24,387 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-08 18:58:24,388 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:58:24,389 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,43115,1686250704061}] 2023-06-08 18:58:24,389 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-08 18:58:24,448 INFO [RS:0;jenkins-hbase17:43115] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C43115%2C1686250704061, suffix=, 
logDir=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/WALs/jenkins-hbase17.apache.org,43115,1686250704061, archiveDir=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/oldWALs, maxLogs=32 2023-06-08 18:58:24,460 INFO [RS:0;jenkins-hbase17:43115] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/WALs/jenkins-hbase17.apache.org,43115,1686250704061/jenkins-hbase17.apache.org%2C43115%2C1686250704061.1686250704449 2023-06-08 18:58:24,460 DEBUG [RS:0;jenkins-hbase17:43115] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40371,DS-5737ba8a-19a1-4285-82ac-65a2e3fabefd,DISK], DatanodeInfoWithStorage[127.0.0.1:40119,DS-76108eb8-f84a-46fe-bd7e-9bd153114a8f,DISK]] 2023-06-08 18:58:24,545 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,43115,1686250704061 2023-06-08 18:58:24,545 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-08 18:58:24,548 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:35610, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-08 18:58:24,552 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-08 18:58:24,552 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 18:58:24,554 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C43115%2C1686250704061.meta, suffix=.meta, 
logDir=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/WALs/jenkins-hbase17.apache.org,43115,1686250704061, archiveDir=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/oldWALs, maxLogs=32 2023-06-08 18:58:24,560 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/WALs/jenkins-hbase17.apache.org,43115,1686250704061/jenkins-hbase17.apache.org%2C43115%2C1686250704061.meta.1686250704554.meta 2023-06-08 18:58:24,560 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40371,DS-5737ba8a-19a1-4285-82ac-65a2e3fabefd,DISK], DatanodeInfoWithStorage[127.0.0.1:40119,DS-76108eb8-f84a-46fe-bd7e-9bd153114a8f,DISK]] 2023-06-08 18:58:24,560 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-08 18:58:24,560 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-08 18:58:24,560 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-08 18:58:24,560 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-06-08 18:58:24,561 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-08 18:58:24,561 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:58:24,561 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-08 18:58:24,561 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-08 18:58:24,562 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-08 18:58:24,563 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/meta/1588230740/info 2023-06-08 18:58:24,563 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/meta/1588230740/info 2023-06-08 18:58:24,563 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-08 18:58:24,564 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:58:24,564 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-08 18:58:24,565 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/meta/1588230740/rep_barrier 2023-06-08 18:58:24,565 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/meta/1588230740/rep_barrier 2023-06-08 18:58:24,565 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-08 18:58:24,565 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:58:24,565 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-08 18:58:24,566 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/meta/1588230740/table 2023-06-08 18:58:24,566 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/meta/1588230740/table 2023-06-08 18:58:24,566 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-08 18:58:24,567 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:58:24,567 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/meta/1588230740 2023-06-08 18:58:24,568 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/meta/1588230740 2023-06-08 18:58:24,570 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-06-08 18:58:24,571 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-08 18:58:24,571 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=807095, jitterRate=0.026274368166923523}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-08 18:58:24,571 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-08 18:58:24,573 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1686250704545 2023-06-08 18:58:24,577 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-08 18:58:24,577 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-08 18:58:24,578 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,43115,1686250704061, state=OPEN 2023-06-08 18:58:24,579 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-08 18:58:24,579 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-08 18:58:24,581 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-08 18:58:24,581 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,43115,1686250704061 in 190 msec 2023-06-08 18:58:24,583 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-08 18:58:24,583 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 350 msec 2023-06-08 18:58:24,585 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 391 msec 2023-06-08 18:58:24,585 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1686250704585, completionTime=-1 2023-06-08 18:58:24,585 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-08 18:58:24,585 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-06-08 18:58:24,587 DEBUG [hconnection-0x587a7f8b-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-08 18:58:24,589 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:35620, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-08 18:58:24,591 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-08 18:58:24,591 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1686250764591 2023-06-08 18:58:24,591 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1686250824591 2023-06-08 18:58:24,591 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-06-08 18:58:24,598 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,36347,1686250704023-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-08 18:58:24,598 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,36347,1686250704023-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-08 18:58:24,598 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,36347,1686250704023-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-06-08 18:58:24,598 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase17:36347, period=300000, unit=MILLISECONDS is enabled. 2023-06-08 18:58:24,598 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-08 18:58:24,598 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-06-08 18:58:24,598 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-08 18:58:24,600 DEBUG [master/jenkins-hbase17:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-08 18:58:24,600 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-08 18:58:24,602 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-08 18:58:24,603 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-08 18:58:24,605 DEBUG [HFileArchiver-9] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/.tmp/data/hbase/namespace/281d7f0972be0f385e77be99bf4769cd 2023-06-08 18:58:24,606 DEBUG [HFileArchiver-9] 
backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/.tmp/data/hbase/namespace/281d7f0972be0f385e77be99bf4769cd empty. 2023-06-08 18:58:24,606 DEBUG [HFileArchiver-9] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/.tmp/data/hbase/namespace/281d7f0972be0f385e77be99bf4769cd 2023-06-08 18:58:24,607 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-08 18:58:24,620 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-08 18:58:24,621 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 281d7f0972be0f385e77be99bf4769cd, NAME => 'hbase:namespace,,1686250704598.281d7f0972be0f385e77be99bf4769cd.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/.tmp 2023-06-08 18:58:24,628 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686250704598.281d7f0972be0f385e77be99bf4769cd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:58:24,628 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 281d7f0972be0f385e77be99bf4769cd, disabling compactions & flushes 2023-06-08 18:58:24,628 INFO 
[RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686250704598.281d7f0972be0f385e77be99bf4769cd. 2023-06-08 18:58:24,628 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686250704598.281d7f0972be0f385e77be99bf4769cd. 2023-06-08 18:58:24,628 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686250704598.281d7f0972be0f385e77be99bf4769cd. after waiting 0 ms 2023-06-08 18:58:24,628 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686250704598.281d7f0972be0f385e77be99bf4769cd. 2023-06-08 18:58:24,628 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686250704598.281d7f0972be0f385e77be99bf4769cd. 2023-06-08 18:58:24,628 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 281d7f0972be0f385e77be99bf4769cd: 2023-06-08 18:58:24,631 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-08 18:58:24,632 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1686250704598.281d7f0972be0f385e77be99bf4769cd.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686250704632"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686250704632"}]},"ts":"1686250704632"} 2023-06-08 18:58:24,634 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-06-08 18:58:24,635 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-08 18:58:24,636 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686250704636"}]},"ts":"1686250704636"} 2023-06-08 18:58:24,637 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-08 18:58:24,641 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=281d7f0972be0f385e77be99bf4769cd, ASSIGN}] 2023-06-08 18:58:24,643 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=281d7f0972be0f385e77be99bf4769cd, ASSIGN 2023-06-08 18:58:24,644 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=281d7f0972be0f385e77be99bf4769cd, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,43115,1686250704061; forceNewPlan=false, retain=false 2023-06-08 18:58:24,796 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=281d7f0972be0f385e77be99bf4769cd, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,43115,1686250704061 2023-06-08 18:58:24,796 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1686250704598.281d7f0972be0f385e77be99bf4769cd.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686250704795"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686250704795"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686250704795"}]},"ts":"1686250704795"} 2023-06-08 18:58:24,800 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 281d7f0972be0f385e77be99bf4769cd, server=jenkins-hbase17.apache.org,43115,1686250704061}] 2023-06-08 18:58:24,957 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1686250704598.281d7f0972be0f385e77be99bf4769cd. 2023-06-08 18:58:24,958 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 281d7f0972be0f385e77be99bf4769cd, NAME => 'hbase:namespace,,1686250704598.281d7f0972be0f385e77be99bf4769cd.', STARTKEY => '', ENDKEY => ''} 2023-06-08 18:58:24,958 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 281d7f0972be0f385e77be99bf4769cd 2023-06-08 18:58:24,958 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686250704598.281d7f0972be0f385e77be99bf4769cd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:58:24,958 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 281d7f0972be0f385e77be99bf4769cd 2023-06-08 18:58:24,958 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 281d7f0972be0f385e77be99bf4769cd 2023-06-08 18:58:24,959 INFO 
[StoreOpener-281d7f0972be0f385e77be99bf4769cd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 281d7f0972be0f385e77be99bf4769cd 2023-06-08 18:58:24,961 DEBUG [StoreOpener-281d7f0972be0f385e77be99bf4769cd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/namespace/281d7f0972be0f385e77be99bf4769cd/info 2023-06-08 18:58:24,961 DEBUG [StoreOpener-281d7f0972be0f385e77be99bf4769cd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/namespace/281d7f0972be0f385e77be99bf4769cd/info 2023-06-08 18:58:24,961 INFO [StoreOpener-281d7f0972be0f385e77be99bf4769cd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 281d7f0972be0f385e77be99bf4769cd columnFamilyName info 2023-06-08 18:58:24,962 INFO [StoreOpener-281d7f0972be0f385e77be99bf4769cd-1] regionserver.HStore(310): Store=281d7f0972be0f385e77be99bf4769cd/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 
2023-06-08 18:58:24,963 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/namespace/281d7f0972be0f385e77be99bf4769cd 2023-06-08 18:58:24,964 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/namespace/281d7f0972be0f385e77be99bf4769cd 2023-06-08 18:58:24,969 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 281d7f0972be0f385e77be99bf4769cd 2023-06-08 18:58:24,972 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/namespace/281d7f0972be0f385e77be99bf4769cd/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 18:58:24,973 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 281d7f0972be0f385e77be99bf4769cd; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=867558, jitterRate=0.10315774381160736}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-08 18:58:24,973 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 281d7f0972be0f385e77be99bf4769cd: 2023-06-08 18:58:24,976 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1686250704598.281d7f0972be0f385e77be99bf4769cd., pid=6, masterSystemTime=1686250704954 2023-06-08 18:58:24,979 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1686250704598.281d7f0972be0f385e77be99bf4769cd. 2023-06-08 18:58:24,979 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1686250704598.281d7f0972be0f385e77be99bf4769cd. 2023-06-08 18:58:24,980 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=281d7f0972be0f385e77be99bf4769cd, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,43115,1686250704061 2023-06-08 18:58:24,981 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1686250704598.281d7f0972be0f385e77be99bf4769cd.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686250704980"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686250704980"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686250704980"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686250704980"}]},"ts":"1686250704980"} 2023-06-08 18:58:24,986 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-08 18:58:24,986 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 281d7f0972be0f385e77be99bf4769cd, server=jenkins-hbase17.apache.org,43115,1686250704061 in 183 msec 2023-06-08 18:58:24,988 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-08 18:58:24,988 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=281d7f0972be0f385e77be99bf4769cd, ASSIGN in 345 msec 2023-06-08 18:58:24,989 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-08 18:58:24,989 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686250704989"}]},"ts":"1686250704989"} 2023-06-08 18:58:24,990 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-08 18:58:24,992 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-08 18:58:24,993 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 394 msec 2023-06-08 18:58:25,001 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-08 18:58:25,002 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-08 18:58:25,002 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:58:25,006 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-08 18:58:25,014 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): master:36347-0x100abcd479d0000, 
quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-08 18:58:25,018 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 11 msec 2023-06-08 18:58:25,028 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-08 18:58:25,037 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-08 18:58:25,041 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 12 msec 2023-06-08 18:58:25,052 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-08 18:58:25,054 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-08 18:58:25,054 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.975sec 2023-06-08 18:58:25,054 INFO [master/jenkins-hbase17:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-08 18:58:25,054 INFO [master/jenkins-hbase17:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-06-08 18:58:25,054 INFO [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-08 18:58:25,054 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,36347,1686250704023-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-08 18:58:25,054 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,36347,1686250704023-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-06-08 18:58:25,056 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-08 18:58:25,078 DEBUG [Listener at localhost.localdomain/41149] zookeeper.ReadOnlyZKClient(139): Connect 0x0ed2e9bd to 127.0.0.1:58592 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-08 18:58:25,081 DEBUG [Listener at localhost.localdomain/41149] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1e07ff7c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-08 18:58:25,083 DEBUG [hconnection-0x6c47fc08-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-08 18:58:25,084 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:35630, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-08 18:58:25,085 INFO [Listener at localhost.localdomain/41149] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase17.apache.org,36347,1686250704023 2023-06-08 18:58:25,085 INFO [Listener at localhost.localdomain/41149] fs.HFileSystem(337): Added intercepting 
call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 18:58:25,087 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-08 18:58:25,087 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:58:25,088 INFO [Listener at localhost.localdomain/41149] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-08 18:58:25,090 DEBUG [Listener at localhost.localdomain/41149] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-06-08 18:58:25,092 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:34222, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-06-08 18:58:25,094 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36347] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-06-08 18:58:25,094 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36347] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-06-08 18:58:25,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36347] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-08 18:58:25,097 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36347] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRolling 2023-06-08 18:58:25,099 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-06-08 18:58:25,099 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36347] master.MasterRpcServices(697): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRolling" procId is: 9 2023-06-08 18:58:25,100 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-08 18:58:25,100 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36347] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-08 18:58:25,101 DEBUG [HFileArchiver-10] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/.tmp/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a 2023-06-08 18:58:25,102 DEBUG [HFileArchiver-10] backup.HFileArchiver(153): Directory 
hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/.tmp/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a empty. 2023-06-08 18:58:25,102 DEBUG [HFileArchiver-10] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/.tmp/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a 2023-06-08 18:58:25,103 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRolling regions 2023-06-08 18:58:25,112 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/.tmp/data/default/TestLogRolling-testLogRolling/.tabledesc/.tableinfo.0000000001 2023-06-08 18:58:25,113 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7bb70d321c62b16faf748011879faa7a, NAME => 'TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/.tmp 2023-06-08 18:58:25,119 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:58:25,119 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] 
regionserver.HRegion(1604): Closing 7bb70d321c62b16faf748011879faa7a, disabling compactions & flushes 2023-06-08 18:58:25,120 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a. 2023-06-08 18:58:25,120 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a. 2023-06-08 18:58:25,120 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a. after waiting 0 ms 2023-06-08 18:58:25,120 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a. 2023-06-08 18:58:25,120 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a. 
2023-06-08 18:58:25,120 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for 7bb70d321c62b16faf748011879faa7a: 2023-06-08 18:58:25,122 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ADD_TO_META 2023-06-08 18:58:25,122 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1686250705122"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686250705122"}]},"ts":"1686250705122"} 2023-06-08 18:58:25,124 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-08 18:58:25,124 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-08 18:58:25,124 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686250705124"}]},"ts":"1686250705124"} 2023-06-08 18:58:25,125 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLING in hbase:meta 2023-06-08 18:58:25,128 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=7bb70d321c62b16faf748011879faa7a, ASSIGN}] 2023-06-08 18:58:25,130 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=7bb70d321c62b16faf748011879faa7a, ASSIGN 2023-06-08 18:58:25,130 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=7bb70d321c62b16faf748011879faa7a, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,43115,1686250704061; forceNewPlan=false, retain=false 2023-06-08 18:58:25,281 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=7bb70d321c62b16faf748011879faa7a, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,43115,1686250704061 2023-06-08 18:58:25,282 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1686250705281"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686250705281"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686250705281"}]},"ts":"1686250705281"} 2023-06-08 18:58:25,284 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 7bb70d321c62b16faf748011879faa7a, server=jenkins-hbase17.apache.org,43115,1686250704061}] 2023-06-08 18:58:25,447 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a. 
2023-06-08 18:58:25,447 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7bb70d321c62b16faf748011879faa7a, NAME => 'TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a.', STARTKEY => '', ENDKEY => ''} 2023-06-08 18:58:25,448 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling 7bb70d321c62b16faf748011879faa7a 2023-06-08 18:58:25,448 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:58:25,448 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 7bb70d321c62b16faf748011879faa7a 2023-06-08 18:58:25,448 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 7bb70d321c62b16faf748011879faa7a 2023-06-08 18:58:25,449 INFO [StoreOpener-7bb70d321c62b16faf748011879faa7a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 7bb70d321c62b16faf748011879faa7a 2023-06-08 18:58:25,451 DEBUG [StoreOpener-7bb70d321c62b16faf748011879faa7a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info 2023-06-08 18:58:25,451 DEBUG [StoreOpener-7bb70d321c62b16faf748011879faa7a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info 2023-06-08 18:58:25,451 INFO [StoreOpener-7bb70d321c62b16faf748011879faa7a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7bb70d321c62b16faf748011879faa7a columnFamilyName info 2023-06-08 18:58:25,452 INFO [StoreOpener-7bb70d321c62b16faf748011879faa7a-1] regionserver.HStore(310): Store=7bb70d321c62b16faf748011879faa7a/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:58:25,452 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a 2023-06-08 18:58:25,453 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a 2023-06-08 18:58:25,455 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] 
regionserver.HRegion(1055): writing seq id for 7bb70d321c62b16faf748011879faa7a 2023-06-08 18:58:25,457 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 18:58:25,458 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 7bb70d321c62b16faf748011879faa7a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=734358, jitterRate=-0.06621597707271576}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-08 18:58:25,458 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 7bb70d321c62b16faf748011879faa7a: 2023-06-08 18:58:25,459 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a., pid=11, masterSystemTime=1686250705437 2023-06-08 18:58:25,460 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a. 2023-06-08 18:58:25,460 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a. 
2023-06-08 18:58:25,461 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=7bb70d321c62b16faf748011879faa7a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,43115,1686250704061 2023-06-08 18:58:25,461 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1686250705461"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686250705461"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686250705461"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686250705461"}]},"ts":"1686250705461"} 2023-06-08 18:58:25,465 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-06-08 18:58:25,465 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 7bb70d321c62b16faf748011879faa7a, server=jenkins-hbase17.apache.org,43115,1686250704061 in 179 msec 2023-06-08 18:58:25,467 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-06-08 18:58:25,467 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=7bb70d321c62b16faf748011879faa7a, ASSIGN in 337 msec 2023-06-08 18:58:25,468 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-08 18:58:25,468 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686250705468"}]},"ts":"1686250705468"} 2023-06-08 18:58:25,470 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLED in hbase:meta 2023-06-08 18:58:25,473 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-06-08 18:58:25,476 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRolling in 379 msec 2023-06-08 18:58:28,213 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-08 18:58:30,316 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-06-08 18:58:30,317 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-06-08 18:58:30,317 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRolling' 2023-06-08 18:58:35,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36347] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-08 18:58:35,103 INFO [Listener at localhost.localdomain/41149] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRolling, procId: 9 completed 2023-06-08 18:58:35,107 DEBUG [Listener at localhost.localdomain/41149] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRolling 2023-06-08 18:58:35,107 DEBUG [Listener at localhost.localdomain/41149] 
hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a. 2023-06-08 18:58:35,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43115] regionserver.HRegion(9158): Flush requested on 7bb70d321c62b16faf748011879faa7a 2023-06-08 18:58:35,127 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 7bb70d321c62b16faf748011879faa7a 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-08 18:58:35,138 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/.tmp/info/70616093c6704f1f907edfaad7f12676 2023-06-08 18:58:35,147 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/.tmp/info/70616093c6704f1f907edfaad7f12676 as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/70616093c6704f1f907edfaad7f12676 2023-06-08 18:58:35,153 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/70616093c6704f1f907edfaad7f12676, entries=7, sequenceid=11, filesize=12.1 K 2023-06-08 18:58:35,154 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=16.81 KB/17216 for 7bb70d321c62b16faf748011879faa7a in 27ms, sequenceid=11, compaction requested=false 2023-06-08 18:58:35,155 DEBUG 
[MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 7bb70d321c62b16faf748011879faa7a: 2023-06-08 18:58:35,155 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43115] regionserver.HRegion(9158): Flush requested on 7bb70d321c62b16faf748011879faa7a 2023-06-08 18:58:35,155 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 7bb70d321c62b16faf748011879faa7a 1/1 column families, dataSize=17.86 KB heapSize=19.38 KB 2023-06-08 18:58:35,167 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=17.86 KB at sequenceid=31 (bloomFilter=true), to=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/.tmp/info/61e9f5e5f0724e21bccc4326ea0741af 2023-06-08 18:58:35,174 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/.tmp/info/61e9f5e5f0724e21bccc4326ea0741af as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/61e9f5e5f0724e21bccc4326ea0741af 2023-06-08 18:58:35,179 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/61e9f5e5f0724e21bccc4326ea0741af, entries=17, sequenceid=31, filesize=22.6 K 2023-06-08 18:58:35,180 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~17.86 KB/18292, heapSize ~19.36 KB/19824, currentSize=8.41 KB/8608 for 7bb70d321c62b16faf748011879faa7a in 25ms, sequenceid=31, compaction requested=false 2023-06-08 18:58:35,180 DEBUG [MemStoreFlusher.0] 
regionserver.HRegion(2446): Flush status journal for 7bb70d321c62b16faf748011879faa7a: 2023-06-08 18:58:35,180 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=34.8 K, sizeToCheck=16.0 K 2023-06-08 18:58:35,180 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-08 18:58:35,180 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/61e9f5e5f0724e21bccc4326ea0741af because midkey is the same as first or last row 2023-06-08 18:58:37,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43115] regionserver.HRegion(9158): Flush requested on 7bb70d321c62b16faf748011879faa7a 2023-06-08 18:58:37,172 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 7bb70d321c62b16faf748011879faa7a 1/1 column families, dataSize=9.46 KB heapSize=10.38 KB 2023-06-08 18:58:37,199 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=9.46 KB at sequenceid=43 (bloomFilter=true), to=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/.tmp/info/9fec5c26e1774bc1895eb1a5a2a91c88 2023-06-08 18:58:37,206 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/.tmp/info/9fec5c26e1774bc1895eb1a5a2a91c88 as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/9fec5c26e1774bc1895eb1a5a2a91c88 2023-06-08 18:58:37,213 INFO 
[MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/9fec5c26e1774bc1895eb1a5a2a91c88, entries=9, sequenceid=43, filesize=14.2 K 2023-06-08 18:58:37,213 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~9.46 KB/9684, heapSize ~10.36 KB/10608, currentSize=18.91 KB/19368 for 7bb70d321c62b16faf748011879faa7a in 42ms, sequenceid=43, compaction requested=true 2023-06-08 18:58:37,214 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 7bb70d321c62b16faf748011879faa7a: 2023-06-08 18:58:37,214 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=49.0 K, sizeToCheck=16.0 K 2023-06-08 18:58:37,214 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-08 18:58:37,214 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/61e9f5e5f0724e21bccc4326ea0741af because midkey is the same as first or last row 2023-06-08 18:58:37,214 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-08 18:58:37,214 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-08 18:58:37,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43115] regionserver.HRegion(9158): Flush requested on 7bb70d321c62b16faf748011879faa7a 2023-06-08 18:58:37,215 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): 
Flushing 7bb70d321c62b16faf748011879faa7a 1/1 column families, dataSize=19.96 KB heapSize=21.63 KB 2023-06-08 18:58:37,216 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 50141 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-08 18:58:37,217 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HStore(1912): 7bb70d321c62b16faf748011879faa7a/info is initiating minor compaction (all files) 2023-06-08 18:58:37,217 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 7bb70d321c62b16faf748011879faa7a/info in TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a. 2023-06-08 18:58:37,217 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/70616093c6704f1f907edfaad7f12676, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/61e9f5e5f0724e21bccc4326ea0741af, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/9fec5c26e1774bc1895eb1a5a2a91c88] into tmpdir=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/.tmp, totalSize=49.0 K 2023-06-08 18:58:37,218 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.Compactor(207): Compacting 70616093c6704f1f907edfaad7f12676, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, 
earliestPutTs=1686250715112 2023-06-08 18:58:37,219 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.Compactor(207): Compacting 61e9f5e5f0724e21bccc4326ea0741af, keycount=17, bloomtype=ROW, size=22.6 K, encoding=NONE, compression=NONE, seqNum=31, earliestPutTs=1686250715129 2023-06-08 18:58:37,220 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.Compactor(207): Compacting 9fec5c26e1774bc1895eb1a5a2a91c88, keycount=9, bloomtype=ROW, size=14.2 K, encoding=NONE, compression=NONE, seqNum=43, earliestPutTs=1686250715155 2023-06-08 18:58:37,234 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43115] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=7bb70d321c62b16faf748011879faa7a, server=jenkins-hbase17.apache.org,43115,1686250704061 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-06-08 18:58:37,235 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43115] ipc.CallRunner(144): callId: 71 service: ClientService methodName: Mutate size: 1.2 K connection: 136.243.18.41:35630 deadline: 1686250727234, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, 
regionName=7bb70d321c62b16faf748011879faa7a, server=jenkins-hbase17.apache.org,43115,1686250704061 2023-06-08 18:58:37,245 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=19.96 KB at sequenceid=65 (bloomFilter=true), to=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/.tmp/info/543281d300014b7b835c579ad9586736 2023-06-08 18:58:37,251 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] throttle.PressureAwareThroughputController(145): 7bb70d321c62b16faf748011879faa7a#info#compaction#31 average throughput is 16.93 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-08 18:58:37,253 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/.tmp/info/543281d300014b7b835c579ad9586736 as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/543281d300014b7b835c579ad9586736 2023-06-08 18:58:37,262 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/543281d300014b7b835c579ad9586736, entries=19, sequenceid=65, filesize=24.7 K 2023-06-08 18:58:37,263 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~19.96 KB/20444, heapSize ~21.61 KB/22128, currentSize=10.51 KB/10760 for 7bb70d321c62b16faf748011879faa7a in 48ms, sequenceid=65, compaction requested=false 2023-06-08 18:58:37,263 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): 
Flush status journal for 7bb70d321c62b16faf748011879faa7a: 2023-06-08 18:58:37,263 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=73.7 K, sizeToCheck=16.0 K 2023-06-08 18:58:37,263 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-08 18:58:37,263 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/543281d300014b7b835c579ad9586736 because midkey is the same as first or last row 2023-06-08 18:58:37,265 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/.tmp/info/ef73440ed1fd4de3bea28444c1f9a15f as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/ef73440ed1fd4de3bea28444c1f9a15f 2023-06-08 18:58:37,272 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 7bb70d321c62b16faf748011879faa7a/info of 7bb70d321c62b16faf748011879faa7a into ef73440ed1fd4de3bea28444c1f9a15f(size=39.6 K), total size for store is 64.4 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-08 18:58:37,272 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 7bb70d321c62b16faf748011879faa7a: 2023-06-08 18:58:37,273 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a., storeName=7bb70d321c62b16faf748011879faa7a/info, priority=13, startTime=1686250717214; duration=0sec 2023-06-08 18:58:37,273 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=64.4 K, sizeToCheck=16.0 K 2023-06-08 18:58:37,273 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-08 18:58:37,274 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/ef73440ed1fd4de3bea28444c1f9a15f because midkey is the same as first or last row 2023-06-08 18:58:37,274 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-08 18:58:47,289 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43115] regionserver.HRegion(9158): Flush requested on 7bb70d321c62b16faf748011879faa7a 2023-06-08 18:58:47,289 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 7bb70d321c62b16faf748011879faa7a 1/1 column families, dataSize=11.56 KB heapSize=12.63 KB 2023-06-08 18:58:47,312 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=11.56 KB at sequenceid=80 (bloomFilter=true), 
to=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/.tmp/info/03eef4c6ed9e4ba9bcd9700e31e8ec86 2023-06-08 18:58:47,320 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/.tmp/info/03eef4c6ed9e4ba9bcd9700e31e8ec86 as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/03eef4c6ed9e4ba9bcd9700e31e8ec86 2023-06-08 18:58:47,327 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/03eef4c6ed9e4ba9bcd9700e31e8ec86, entries=11, sequenceid=80, filesize=16.3 K 2023-06-08 18:58:47,328 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~11.56 KB/11836, heapSize ~12.61 KB/12912, currentSize=1.05 KB/1076 for 7bb70d321c62b16faf748011879faa7a in 39ms, sequenceid=80, compaction requested=true 2023-06-08 18:58:47,329 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 7bb70d321c62b16faf748011879faa7a: 2023-06-08 18:58:47,329 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=80.7 K, sizeToCheck=16.0 K 2023-06-08 18:58:47,329 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-08 18:58:47,329 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split 
hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/ef73440ed1fd4de3bea28444c1f9a15f because midkey is the same as first or last row 2023-06-08 18:58:47,329 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-08 18:58:47,329 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-08 18:58:47,331 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 82610 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-08 18:58:47,331 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HStore(1912): 7bb70d321c62b16faf748011879faa7a/info is initiating minor compaction (all files) 2023-06-08 18:58:47,331 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 7bb70d321c62b16faf748011879faa7a/info in TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a. 
2023-06-08 18:58:47,332 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/ef73440ed1fd4de3bea28444c1f9a15f, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/543281d300014b7b835c579ad9586736, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/03eef4c6ed9e4ba9bcd9700e31e8ec86] into tmpdir=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/.tmp, totalSize=80.7 K 2023-06-08 18:58:47,332 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.Compactor(207): Compacting ef73440ed1fd4de3bea28444c1f9a15f, keycount=33, bloomtype=ROW, size=39.6 K, encoding=NONE, compression=NONE, seqNum=43, earliestPutTs=1686250715112 2023-06-08 18:58:47,333 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.Compactor(207): Compacting 543281d300014b7b835c579ad9586736, keycount=19, bloomtype=ROW, size=24.7 K, encoding=NONE, compression=NONE, seqNum=65, earliestPutTs=1686250717175 2023-06-08 18:58:47,333 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.Compactor(207): Compacting 03eef4c6ed9e4ba9bcd9700e31e8ec86, keycount=11, bloomtype=ROW, size=16.3 K, encoding=NONE, compression=NONE, seqNum=80, earliestPutTs=1686250717216 2023-06-08 18:58:47,346 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] throttle.PressureAwareThroughputController(145): 7bb70d321c62b16faf748011879faa7a#info#compaction#33 average throughput is 21.55 MB/second, slept 0 time(s) and total 
slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-08 18:58:47,358 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/.tmp/info/76ee1f39b051458c9be973f5d17f864e as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/76ee1f39b051458c9be973f5d17f864e 2023-06-08 18:58:47,363 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 7bb70d321c62b16faf748011879faa7a/info of 7bb70d321c62b16faf748011879faa7a into 76ee1f39b051458c9be973f5d17f864e(size=71.4 K), total size for store is 71.4 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-08 18:58:47,363 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 7bb70d321c62b16faf748011879faa7a: 2023-06-08 18:58:47,363 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a., storeName=7bb70d321c62b16faf748011879faa7a/info, priority=13, startTime=1686250727329; duration=0sec 2023-06-08 18:58:47,363 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=71.4 K, sizeToCheck=16.0 K 2023-06-08 18:58:47,363 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-08 18:58:47,364 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.CompactSplit(227): Splitting TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a., compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-08 18:58:47,364 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-08 18:58:47,365 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36347] assignment.AssignmentManager(1140): Split request from jenkins-hbase17.apache.org,43115,1686250704061, parent={ENCODED => 7bb70d321c62b16faf748011879faa7a, NAME => 'TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a.', STARTKEY => '', ENDKEY => ''} splitKey=row0062 2023-06-08 18:58:47,370 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36347] assignment.SplitTableRegionProcedure(219): Splittable=true state=OPEN, location=jenkins-hbase17.apache.org,43115,1686250704061 2023-06-08 18:58:47,375 DEBUG 
[RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36347] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=7bb70d321c62b16faf748011879faa7a, daughterA=c873d26ccca20e5c8b0cb6b968f48772, daughterB=a8913070c8a542b8076a02b10e0081b5 2023-06-08 18:58:47,376 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=7bb70d321c62b16faf748011879faa7a, daughterA=c873d26ccca20e5c8b0cb6b968f48772, daughterB=a8913070c8a542b8076a02b10e0081b5 2023-06-08 18:58:47,384 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=7bb70d321c62b16faf748011879faa7a, UNASSIGN}] 2023-06-08 18:58:47,385 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=7bb70d321c62b16faf748011879faa7a, UNASSIGN 2023-06-08 18:58:47,386 INFO [PEWorker-1] 
assignment.RegionStateStore(219): pid=13 updating hbase:meta row=7bb70d321c62b16faf748011879faa7a, regionState=CLOSING, regionLocation=jenkins-hbase17.apache.org,43115,1686250704061 2023-06-08 18:58:47,387 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1686250727386"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686250727386"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686250727386"}]},"ts":"1686250727386"} 2023-06-08 18:58:47,388 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; CloseRegionProcedure 7bb70d321c62b16faf748011879faa7a, server=jenkins-hbase17.apache.org,43115,1686250704061}] 2023-06-08 18:58:47,550 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(111): Close 7bb70d321c62b16faf748011879faa7a 2023-06-08 18:58:47,550 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 7bb70d321c62b16faf748011879faa7a, disabling compactions & flushes 2023-06-08 18:58:47,550 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a. 2023-06-08 18:58:47,550 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a. 2023-06-08 18:58:47,551 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a. 
after waiting 0 ms 2023-06-08 18:58:47,551 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a. 2023-06-08 18:58:47,551 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 7bb70d321c62b16faf748011879faa7a 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-06-08 18:58:47,565 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=85 (bloomFilter=true), to=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/.tmp/info/6d096f73787d47c18f45a9705173f797 2023-06-08 18:58:47,576 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/.tmp/info/6d096f73787d47c18f45a9705173f797 as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/6d096f73787d47c18f45a9705173f797 2023-06-08 18:58:47,583 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/6d096f73787d47c18f45a9705173f797, entries=1, sequenceid=85, filesize=5.8 K 2023-06-08 18:58:47,584 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 7bb70d321c62b16faf748011879faa7a in 
33ms, sequenceid=85, compaction requested=false 2023-06-08 18:58:47,592 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/70616093c6704f1f907edfaad7f12676, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/61e9f5e5f0724e21bccc4326ea0741af, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/ef73440ed1fd4de3bea28444c1f9a15f, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/9fec5c26e1774bc1895eb1a5a2a91c88, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/543281d300014b7b835c579ad9586736, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/03eef4c6ed9e4ba9bcd9700e31e8ec86] to archive 2023-06-08 18:58:47,593 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-06-08 18:58:47,595 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/70616093c6704f1f907edfaad7f12676 to hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/archive/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/70616093c6704f1f907edfaad7f12676 2023-06-08 18:58:47,596 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/61e9f5e5f0724e21bccc4326ea0741af to hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/archive/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/61e9f5e5f0724e21bccc4326ea0741af 2023-06-08 18:58:47,597 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/ef73440ed1fd4de3bea28444c1f9a15f to hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/archive/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/ef73440ed1fd4de3bea28444c1f9a15f 2023-06-08 18:58:47,599 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a.-1] backup.HFileArchiver(582): 
Archived from FileableStoreFile, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/9fec5c26e1774bc1895eb1a5a2a91c88 to hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/archive/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/9fec5c26e1774bc1895eb1a5a2a91c88 2023-06-08 18:58:47,601 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/543281d300014b7b835c579ad9586736 to hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/archive/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/543281d300014b7b835c579ad9586736 2023-06-08 18:58:47,602 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/03eef4c6ed9e4ba9bcd9700e31e8ec86 to hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/archive/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/03eef4c6ed9e4ba9bcd9700e31e8ec86 2023-06-08 18:58:47,611 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/recovered.edits/88.seqid, newMaxSeqId=88, maxSeqId=1 2023-06-08 18:58:47,612 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a. 2023-06-08 18:58:47,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 7bb70d321c62b16faf748011879faa7a: 2023-06-08 18:58:47,614 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.UnassignRegionHandler(149): Closed 7bb70d321c62b16faf748011879faa7a 2023-06-08 18:58:47,614 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=7bb70d321c62b16faf748011879faa7a, regionState=CLOSED 2023-06-08 18:58:47,615 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1686250727614"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686250727614"}]},"ts":"1686250727614"} 2023-06-08 18:58:47,617 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-06-08 18:58:47,618 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; CloseRegionProcedure 7bb70d321c62b16faf748011879faa7a, server=jenkins-hbase17.apache.org,43115,1686250704061 in 228 msec 2023-06-08 18:58:47,619 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-06-08 18:58:47,619 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, 
region=7bb70d321c62b16faf748011879faa7a, UNASSIGN in 234 msec 2023-06-08 18:58:47,631 INFO [PEWorker-4] assignment.SplitTableRegionProcedure(694): pid=12 splitting 2 storefiles, region=7bb70d321c62b16faf748011879faa7a, threads=2 2023-06-08 18:58:47,632 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/6d096f73787d47c18f45a9705173f797 for region: 7bb70d321c62b16faf748011879faa7a 2023-06-08 18:58:47,632 DEBUG [StoreFileSplitter-pool-1] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/76ee1f39b051458c9be973f5d17f864e for region: 7bb70d321c62b16faf748011879faa7a 2023-06-08 18:58:47,642 DEBUG [StoreFileSplitter-pool-0] regionserver.HRegionFileSystem(700): Will create HFileLink file for hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/6d096f73787d47c18f45a9705173f797, top=true 2023-06-08 18:58:47,656 INFO [StoreFileSplitter-pool-0] regionserver.HRegionFileSystem(742): Created linkFile:hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/.splits/a8913070c8a542b8076a02b10e0081b5/info/TestLogRolling-testLogRolling=7bb70d321c62b16faf748011879faa7a-6d096f73787d47c18f45a9705173f797 for child: a8913070c8a542b8076a02b10e0081b5, parent: 7bb70d321c62b16faf748011879faa7a 2023-06-08 18:58:47,656 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete 
for store file: hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/6d096f73787d47c18f45a9705173f797 for region: 7bb70d321c62b16faf748011879faa7a 2023-06-08 18:58:47,670 DEBUG [StoreFileSplitter-pool-1] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/76ee1f39b051458c9be973f5d17f864e for region: 7bb70d321c62b16faf748011879faa7a 2023-06-08 18:58:47,670 DEBUG [PEWorker-4] assignment.SplitTableRegionProcedure(755): pid=12 split storefiles for region 7bb70d321c62b16faf748011879faa7a Daughter A: 1 storefiles, Daughter B: 2 storefiles. 2023-06-08 18:58:47,698 DEBUG [PEWorker-4] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/c873d26ccca20e5c8b0cb6b968f48772/recovered.edits/88.seqid, newMaxSeqId=88, maxSeqId=-1 2023-06-08 18:58:47,701 DEBUG [PEWorker-4] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/recovered.edits/88.seqid, newMaxSeqId=88, maxSeqId=-1 2023-06-08 18:58:47,703 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1686250727703"},{"qualifier":"splitA","vlen":70,"tag":[],"timestamp":"1686250727703"},{"qualifier":"splitB","vlen":70,"tag":[],"timestamp":"1686250727703"}]},"ts":"1686250727703"} 2023-06-08 18:58:47,703 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1686250727370.c873d26ccca20e5c8b0cb6b968f48772.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1686250727703"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686250727703"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686250727703"}]},"ts":"1686250727703"} 2023-06-08 18:58:47,703 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1686250727703"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686250727703"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686250727703"}]},"ts":"1686250727703"} 2023-06-08 18:58:47,741 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43115] regionserver.HRegion(9158): Flush requested on 1588230740 2023-06-08 18:58:47,741 DEBUG [MemStoreFlusher.0] regionserver.FlushAllLargeStoresPolicy(69): Since none of the CFs were above the size, flushing all. 
2023-06-08 18:58:47,741 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.82 KB heapSize=8.36 KB 2023-06-08 18:58:47,751 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.61 KB at sequenceid=17 (bloomFilter=false), to=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/meta/1588230740/.tmp/info/2cf30a682e444689b9568fb524c7ff3a 2023-06-08 18:58:47,753 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c873d26ccca20e5c8b0cb6b968f48772, ASSIGN}, {pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=a8913070c8a542b8076a02b10e0081b5, ASSIGN}] 2023-06-08 18:58:47,754 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=a8913070c8a542b8076a02b10e0081b5, ASSIGN 2023-06-08 18:58:47,755 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c873d26ccca20e5c8b0cb6b968f48772, ASSIGN 2023-06-08 18:58:47,756 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=a8913070c8a542b8076a02b10e0081b5, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase17.apache.org,43115,1686250704061; forceNewPlan=false, retain=false 2023-06-08 18:58:47,756 INFO 
[PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c873d26ccca20e5c8b0cb6b968f48772, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase17.apache.org,43115,1686250704061; forceNewPlan=false, retain=false 2023-06-08 18:58:47,774 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=216 B at sequenceid=17 (bloomFilter=false), to=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/meta/1588230740/.tmp/table/c233bd29872d49ab8ef772a95f0aa387 2023-06-08 18:58:47,790 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/meta/1588230740/.tmp/info/2cf30a682e444689b9568fb524c7ff3a as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/meta/1588230740/info/2cf30a682e444689b9568fb524c7ff3a 2023-06-08 18:58:47,799 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/meta/1588230740/info/2cf30a682e444689b9568fb524c7ff3a, entries=29, sequenceid=17, filesize=8.6 K 2023-06-08 18:58:47,801 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/meta/1588230740/.tmp/table/c233bd29872d49ab8ef772a95f0aa387 as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/meta/1588230740/table/c233bd29872d49ab8ef772a95f0aa387 2023-06-08 18:58:47,812 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added 
hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/meta/1588230740/table/c233bd29872d49ab8ef772a95f0aa387, entries=4, sequenceid=17, filesize=4.8 K 2023-06-08 18:58:47,814 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~4.82 KB/4939, heapSize ~8.08 KB/8272, currentSize=0 B/0 for 1588230740 in 72ms, sequenceid=17, compaction requested=false 2023-06-08 18:58:47,815 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-06-08 18:58:47,908 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=c873d26ccca20e5c8b0cb6b968f48772, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,43115,1686250704061 2023-06-08 18:58:47,908 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=a8913070c8a542b8076a02b10e0081b5, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,43115,1686250704061 2023-06-08 18:58:47,908 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1686250727370.c873d26ccca20e5c8b0cb6b968f48772.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1686250727908"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686250727908"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686250727908"}]},"ts":"1686250727908"} 2023-06-08 18:58:47,908 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1686250727908"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686250727908"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686250727908"}]},"ts":"1686250727908"} 2023-06-08 18:58:47,910 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, 
state=RUNNABLE; OpenRegionProcedure c873d26ccca20e5c8b0cb6b968f48772, server=jenkins-hbase17.apache.org,43115,1686250704061}] 2023-06-08 18:58:47,911 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=16, state=RUNNABLE; OpenRegionProcedure a8913070c8a542b8076a02b10e0081b5, server=jenkins-hbase17.apache.org,43115,1686250704061}] 2023-06-08 18:58:48,066 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,,1686250727370.c873d26ccca20e5c8b0cb6b968f48772. 2023-06-08 18:58:48,066 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c873d26ccca20e5c8b0cb6b968f48772, NAME => 'TestLogRolling-testLogRolling,,1686250727370.c873d26ccca20e5c8b0cb6b968f48772.', STARTKEY => '', ENDKEY => 'row0062'} 2023-06-08 18:58:48,066 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling c873d26ccca20e5c8b0cb6b968f48772 2023-06-08 18:58:48,066 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1686250727370.c873d26ccca20e5c8b0cb6b968f48772.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:58:48,066 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for c873d26ccca20e5c8b0cb6b968f48772 2023-06-08 18:58:48,067 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for c873d26ccca20e5c8b0cb6b968f48772 2023-06-08 18:58:48,068 INFO [StoreOpener-c873d26ccca20e5c8b0cb6b968f48772-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, 
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region c873d26ccca20e5c8b0cb6b968f48772 2023-06-08 18:58:48,069 DEBUG [StoreOpener-c873d26ccca20e5c8b0cb6b968f48772-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/c873d26ccca20e5c8b0cb6b968f48772/info 2023-06-08 18:58:48,069 DEBUG [StoreOpener-c873d26ccca20e5c8b0cb6b968f48772-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/c873d26ccca20e5c8b0cb6b968f48772/info 2023-06-08 18:58:48,069 INFO [StoreOpener-c873d26ccca20e5c8b0cb6b968f48772-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c873d26ccca20e5c8b0cb6b968f48772 columnFamilyName info 2023-06-08 18:58:48,083 DEBUG [StoreOpener-c873d26ccca20e5c8b0cb6b968f48772-1] regionserver.HStore(539): loaded 
hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/c873d26ccca20e5c8b0cb6b968f48772/info/76ee1f39b051458c9be973f5d17f864e.7bb70d321c62b16faf748011879faa7a->hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/76ee1f39b051458c9be973f5d17f864e-bottom 2023-06-08 18:58:48,083 INFO [StoreOpener-c873d26ccca20e5c8b0cb6b968f48772-1] regionserver.HStore(310): Store=c873d26ccca20e5c8b0cb6b968f48772/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:58:48,084 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/c873d26ccca20e5c8b0cb6b968f48772 2023-06-08 18:58:48,086 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/c873d26ccca20e5c8b0cb6b968f48772 2023-06-08 18:58:48,089 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for c873d26ccca20e5c8b0cb6b968f48772 2023-06-08 18:58:48,090 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened c873d26ccca20e5c8b0cb6b968f48772; next sequenceid=89; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=742390, jitterRate=-0.0560033917427063}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-08 18:58:48,090 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for c873d26ccca20e5c8b0cb6b968f48772: 2023-06-08 18:58:48,091 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1686250727370.c873d26ccca20e5c8b0cb6b968f48772., pid=17, masterSystemTime=1686250728062 2023-06-08 18:58:48,091 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-08 18:58:48,092 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 1 store files, 0 compacting, 1 eligible, 16 blocking 2023-06-08 18:58:48,092 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,,1686250727370.c873d26ccca20e5c8b0cb6b968f48772. 2023-06-08 18:58:48,092 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HStore(1912): c873d26ccca20e5c8b0cb6b968f48772/info is initiating minor compaction (all files) 2023-06-08 18:58:48,093 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of c873d26ccca20e5c8b0cb6b968f48772/info in TestLogRolling-testLogRolling,,1686250727370.c873d26ccca20e5c8b0cb6b968f48772. 
2023-06-08 18:58:48,093 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/c873d26ccca20e5c8b0cb6b968f48772/info/76ee1f39b051458c9be973f5d17f864e.7bb70d321c62b16faf748011879faa7a->hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/76ee1f39b051458c9be973f5d17f864e-bottom] into tmpdir=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/c873d26ccca20e5c8b0cb6b968f48772/.tmp, totalSize=71.4 K 2023-06-08 18:58:48,093 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.Compactor(207): Compacting 76ee1f39b051458c9be973f5d17f864e.7bb70d321c62b16faf748011879faa7a, keycount=31, bloomtype=ROW, size=71.4 K, encoding=NONE, compression=NONE, seqNum=80, earliestPutTs=1686250715112 2023-06-08 18:58:48,093 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1686250727370.c873d26ccca20e5c8b0cb6b968f48772. 2023-06-08 18:58:48,093 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1686250727370.c873d26ccca20e5c8b0cb6b968f48772. 2023-06-08 18:58:48,093 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5. 
2023-06-08 18:58:48,094 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a8913070c8a542b8076a02b10e0081b5, NAME => 'TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.', STARTKEY => 'row0062', ENDKEY => ''} 2023-06-08 18:58:48,094 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling a8913070c8a542b8076a02b10e0081b5 2023-06-08 18:58:48,094 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=c873d26ccca20e5c8b0cb6b968f48772, regionState=OPEN, openSeqNum=89, regionLocation=jenkins-hbase17.apache.org,43115,1686250704061 2023-06-08 18:58:48,094 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:58:48,094 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for a8913070c8a542b8076a02b10e0081b5 2023-06-08 18:58:48,094 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1686250727370.c873d26ccca20e5c8b0cb6b968f48772.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1686250728094"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686250728094"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686250728094"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686250728094"}]},"ts":"1686250728094"} 2023-06-08 18:58:48,094 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for a8913070c8a542b8076a02b10e0081b5 2023-06-08 18:58:48,095 INFO 
[StoreOpener-a8913070c8a542b8076a02b10e0081b5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region a8913070c8a542b8076a02b10e0081b5 2023-06-08 18:58:48,096 DEBUG [StoreOpener-a8913070c8a542b8076a02b10e0081b5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info 2023-06-08 18:58:48,097 DEBUG [StoreOpener-a8913070c8a542b8076a02b10e0081b5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info 2023-06-08 18:58:48,097 INFO [StoreOpener-a8913070c8a542b8076a02b10e0081b5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a8913070c8a542b8076a02b10e0081b5 columnFamilyName info 2023-06-08 18:58:48,100 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-06-08 18:58:48,100 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, 
state=SUCCESS; OpenRegionProcedure c873d26ccca20e5c8b0cb6b968f48772, server=jenkins-hbase17.apache.org,43115,1686250704061 in 186 msec 2023-06-08 18:58:48,101 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] throttle.PressureAwareThroughputController(145): c873d26ccca20e5c8b0cb6b968f48772#info#compaction#37 average throughput is 20.87 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-08 18:58:48,103 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c873d26ccca20e5c8b0cb6b968f48772, ASSIGN in 347 msec 2023-06-08 18:58:48,113 DEBUG [StoreOpener-a8913070c8a542b8076a02b10e0081b5-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/76ee1f39b051458c9be973f5d17f864e.7bb70d321c62b16faf748011879faa7a->hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/76ee1f39b051458c9be973f5d17f864e-top 2023-06-08 18:58:48,124 DEBUG [StoreOpener-a8913070c8a542b8076a02b10e0081b5-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/TestLogRolling-testLogRolling=7bb70d321c62b16faf748011879faa7a-6d096f73787d47c18f45a9705173f797 2023-06-08 18:58:48,124 INFO [StoreOpener-a8913070c8a542b8076a02b10e0081b5-1] regionserver.HStore(310): Store=a8913070c8a542b8076a02b10e0081b5/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:58:48,125 DEBUG 
[RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/c873d26ccca20e5c8b0cb6b968f48772/.tmp/info/1e320328cfa14554bfaae59bf267fea1 as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/c873d26ccca20e5c8b0cb6b968f48772/info/1e320328cfa14554bfaae59bf267fea1 2023-06-08 18:58:48,125 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5 2023-06-08 18:58:48,126 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5 2023-06-08 18:58:48,130 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for a8913070c8a542b8076a02b10e0081b5 2023-06-08 18:58:48,131 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened a8913070c8a542b8076a02b10e0081b5; next sequenceid=89; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=730319, jitterRate=-0.07135185599327087}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-08 18:58:48,131 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for a8913070c8a542b8076a02b10e0081b5: 2023-06-08 18:58:48,131 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open 
deploy tasks for TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5., pid=18, masterSystemTime=1686250728062 2023-06-08 18:58:48,132 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-08 18:58:48,133 DEBUG [RS:0;jenkins-hbase17:43115-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 2 store files, 0 compacting, 2 eligible, 16 blocking 2023-06-08 18:58:48,134 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 1 (all) file(s) in c873d26ccca20e5c8b0cb6b968f48772/info of c873d26ccca20e5c8b0cb6b968f48772 into 1e320328cfa14554bfaae59bf267fea1(size=69.1 K), total size for store is 69.1 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-06-08 18:58:48,134 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for c873d26ccca20e5c8b0cb6b968f48772: 2023-06-08 18:58:48,134 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1686250727370.c873d26ccca20e5c8b0cb6b968f48772., storeName=c873d26ccca20e5c8b0cb6b968f48772/info, priority=15, startTime=1686250728091; duration=0sec 2023-06-08 18:58:48,134 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-08 18:58:48,135 INFO [RS:0;jenkins-hbase17:43115-longCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5. 
2023-06-08 18:58:48,135 DEBUG [RS:0;jenkins-hbase17:43115-longCompactions-0] regionserver.HStore(1912): a8913070c8a542b8076a02b10e0081b5/info is initiating minor compaction (all files) 2023-06-08 18:58:48,135 INFO [RS:0;jenkins-hbase17:43115-longCompactions-0] regionserver.HRegion(2259): Starting compaction of a8913070c8a542b8076a02b10e0081b5/info in TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5. 2023-06-08 18:58:48,135 INFO [RS:0;jenkins-hbase17:43115-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/76ee1f39b051458c9be973f5d17f864e.7bb70d321c62b16faf748011879faa7a->hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/76ee1f39b051458c9be973f5d17f864e-top, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/TestLogRolling-testLogRolling=7bb70d321c62b16faf748011879faa7a-6d096f73787d47c18f45a9705173f797] into tmpdir=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp, totalSize=77.2 K 2023-06-08 18:58:48,135 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5. 2023-06-08 18:58:48,135 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5. 
2023-06-08 18:58:48,136 DEBUG [RS:0;jenkins-hbase17:43115-longCompactions-0] compactions.Compactor(207): Compacting 76ee1f39b051458c9be973f5d17f864e.7bb70d321c62b16faf748011879faa7a, keycount=31, bloomtype=ROW, size=71.4 K, encoding=NONE, compression=NONE, seqNum=81, earliestPutTs=1686250715112 2023-06-08 18:58:48,136 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=a8913070c8a542b8076a02b10e0081b5, regionState=OPEN, openSeqNum=89, regionLocation=jenkins-hbase17.apache.org,43115,1686250704061 2023-06-08 18:58:48,136 DEBUG [RS:0;jenkins-hbase17:43115-longCompactions-0] compactions.Compactor(207): Compacting TestLogRolling-testLogRolling=7bb70d321c62b16faf748011879faa7a-6d096f73787d47c18f45a9705173f797, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=85, earliestPutTs=1686250727290 2023-06-08 18:58:48,136 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1686250728136"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686250728136"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686250728136"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686250728136"}]},"ts":"1686250728136"} 2023-06-08 18:58:48,141 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=16 2023-06-08 18:58:48,141 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=16, state=SUCCESS; OpenRegionProcedure a8913070c8a542b8076a02b10e0081b5, server=jenkins-hbase17.apache.org,43115,1686250704061 in 228 msec 2023-06-08 18:58:48,144 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=12 2023-06-08 18:58:48,144 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=16, 
ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=a8913070c8a542b8076a02b10e0081b5, ASSIGN in 388 msec 2023-06-08 18:58:48,146 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=7bb70d321c62b16faf748011879faa7a, daughterA=c873d26ccca20e5c8b0cb6b968f48772, daughterB=a8913070c8a542b8076a02b10e0081b5 in 774 msec 2023-06-08 18:58:48,156 INFO [RS:0;jenkins-hbase17:43115-longCompactions-0] throttle.PressureAwareThroughputController(145): a8913070c8a542b8076a02b10e0081b5#info#compaction#38 average throughput is 3.08 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-08 18:58:48,183 DEBUG [RS:0;jenkins-hbase17:43115-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/6fcc58b3268d4034854f790fc52e2d32 as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/6fcc58b3268d4034854f790fc52e2d32 2023-06-08 18:58:48,192 INFO [RS:0;jenkins-hbase17:43115-longCompactions-0] regionserver.HStore(1652): Completed compaction of 2 (all) file(s) in a8913070c8a542b8076a02b10e0081b5/info of a8913070c8a542b8076a02b10e0081b5 into 6fcc58b3268d4034854f790fc52e2d32(size=8.1 K), total size for store is 8.1 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-08 18:58:48,193 DEBUG [RS:0;jenkins-hbase17:43115-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for a8913070c8a542b8076a02b10e0081b5: 2023-06-08 18:58:48,193 INFO [RS:0;jenkins-hbase17:43115-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5., storeName=a8913070c8a542b8076a02b10e0081b5/info, priority=14, startTime=1686250728131; duration=0sec 2023-06-08 18:58:48,193 DEBUG [RS:0;jenkins-hbase17:43115-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-08 18:58:49,296 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43115] ipc.CallRunner(144): callId: 75 service: ClientService methodName: Mutate size: 1.2 K connection: 136.243.18.41:35630 deadline: 1686250739295, exception=org.apache.hadoop.hbase.NotServingRegionException: TestLogRolling-testLogRolling,,1686250705093.7bb70d321c62b16faf748011879faa7a. 
is not online on jenkins-hbase17.apache.org,43115,1686250704061 2023-06-08 18:58:53,179 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-08 18:58:59,343 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43115] regionserver.HRegion(9158): Flush requested on a8913070c8a542b8076a02b10e0081b5 2023-06-08 18:58:59,343 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing a8913070c8a542b8076a02b10e0081b5 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-08 18:58:59,353 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=99 (bloomFilter=true), to=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/2dff9f40227242cd87d1b925f45b998f 2023-06-08 18:58:59,359 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/2dff9f40227242cd87d1b925f45b998f as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/2dff9f40227242cd87d1b925f45b998f 2023-06-08 18:58:59,364 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/2dff9f40227242cd87d1b925f45b998f, entries=7, sequenceid=99, filesize=12.1 K 2023-06-08 18:58:59,365 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=17.86 KB/18292 for 
a8913070c8a542b8076a02b10e0081b5 in 22ms, sequenceid=99, compaction requested=false 2023-06-08 18:58:59,365 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for a8913070c8a542b8076a02b10e0081b5: 2023-06-08 18:58:59,366 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43115] regionserver.HRegion(9158): Flush requested on a8913070c8a542b8076a02b10e0081b5 2023-06-08 18:58:59,366 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing a8913070c8a542b8076a02b10e0081b5 1/1 column families, dataSize=19.96 KB heapSize=21.63 KB 2023-06-08 18:58:59,377 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=19.96 KB at sequenceid=121 (bloomFilter=true), to=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/d7a72fb1b5db48cb9e9064ad5a835928 2023-06-08 18:58:59,381 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/d7a72fb1b5db48cb9e9064ad5a835928 as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/d7a72fb1b5db48cb9e9064ad5a835928 2023-06-08 18:58:59,387 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/d7a72fb1b5db48cb9e9064ad5a835928, entries=19, sequenceid=121, filesize=24.7 K 2023-06-08 18:58:59,388 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~19.96 KB/20444, heapSize ~21.61 KB/22128, currentSize=6.30 KB/6456 for 
a8913070c8a542b8076a02b10e0081b5 in 22ms, sequenceid=121, compaction requested=true 2023-06-08 18:58:59,388 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for a8913070c8a542b8076a02b10e0081b5: 2023-06-08 18:58:59,388 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-06-08 18:58:59,388 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-08 18:58:59,390 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 45991 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-08 18:58:59,390 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HStore(1912): a8913070c8a542b8076a02b10e0081b5/info is initiating minor compaction (all files) 2023-06-08 18:58:59,390 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of a8913070c8a542b8076a02b10e0081b5/info in TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5. 
2023-06-08 18:58:59,390 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/6fcc58b3268d4034854f790fc52e2d32, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/2dff9f40227242cd87d1b925f45b998f, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/d7a72fb1b5db48cb9e9064ad5a835928] into tmpdir=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp, totalSize=44.9 K 2023-06-08 18:58:59,390 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.Compactor(207): Compacting 6fcc58b3268d4034854f790fc52e2d32, keycount=3, bloomtype=ROW, size=8.1 K, encoding=NONE, compression=NONE, seqNum=85, earliestPutTs=1686250717233 2023-06-08 18:58:59,391 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.Compactor(207): Compacting 2dff9f40227242cd87d1b925f45b998f, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=99, earliestPutTs=1686250739336 2023-06-08 18:58:59,391 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.Compactor(207): Compacting d7a72fb1b5db48cb9e9064ad5a835928, keycount=19, bloomtype=ROW, size=24.7 K, encoding=NONE, compression=NONE, seqNum=121, earliestPutTs=1686250739344 2023-06-08 18:58:59,402 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] throttle.PressureAwareThroughputController(145): a8913070c8a542b8076a02b10e0081b5#info#compaction#41 average throughput is 29.76 MB/second, slept 0 time(s) and total 
slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-06-08 18:58:59,415 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/b30b8becffac4047bb8a22499004e981 as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/b30b8becffac4047bb8a22499004e981
2023-06-08 18:58:59,420 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in a8913070c8a542b8076a02b10e0081b5/info of a8913070c8a542b8076a02b10e0081b5 into b30b8becffac4047bb8a22499004e981(size=35.6 K), total size for store is 35.6 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-06-08 18:58:59,420 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for a8913070c8a542b8076a02b10e0081b5:
2023-06-08 18:58:59,420 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5., storeName=a8913070c8a542b8076a02b10e0081b5/info, priority=13, startTime=1686250739388; duration=0sec
2023-06-08 18:58:59,420 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-06-08 18:59:01,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43115] regionserver.HRegion(9158): Flush requested on a8913070c8a542b8076a02b10e0081b5
2023-06-08 18:59:01,379 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing a8913070c8a542b8076a02b10e0081b5 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB
2023-06-08 18:59:01,392 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=132 (bloomFilter=true), to=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/8af9347ddfda4a9bb950e3b94205456b
2023-06-08 18:59:01,398 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/8af9347ddfda4a9bb950e3b94205456b as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/8af9347ddfda4a9bb950e3b94205456b
2023-06-08 18:59:01,404 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/8af9347ddfda4a9bb950e3b94205456b, entries=7, sequenceid=132, filesize=12.1 K
2023-06-08 18:59:01,405 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=16.81 KB/17216 for a8913070c8a542b8076a02b10e0081b5 in 26ms, sequenceid=132, compaction requested=false
2023-06-08 18:59:01,405 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for a8913070c8a542b8076a02b10e0081b5:
2023-06-08 18:59:01,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43115] regionserver.HRegion(9158): Flush requested on a8913070c8a542b8076a02b10e0081b5
2023-06-08 18:59:01,405 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing a8913070c8a542b8076a02b10e0081b5 1/1 column families, dataSize=17.86 KB heapSize=19.38 KB
2023-06-08 18:59:01,428 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=17.86 KB at sequenceid=152 (bloomFilter=true), to=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/a668ceb53be54751b6e8b058154bb78e
2023-06-08 18:59:01,429 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43115] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=a8913070c8a542b8076a02b10e0081b5, server=jenkins-hbase17.apache.org,43115,1686250704061
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-06-08 18:59:01,429 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43115] ipc.CallRunner(144): callId: 141 service: ClientService methodName: Mutate size: 1.2 K connection: 136.243.18.41:35630 deadline: 1686250751429, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=a8913070c8a542b8076a02b10e0081b5, server=jenkins-hbase17.apache.org,43115,1686250704061
2023-06-08 18:59:01,433 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/a668ceb53be54751b6e8b058154bb78e as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/a668ceb53be54751b6e8b058154bb78e
2023-06-08 18:59:01,438 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/a668ceb53be54751b6e8b058154bb78e, entries=17, sequenceid=152, filesize=22.7 K
2023-06-08 18:59:01,439 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~17.86 KB/18292, heapSize ~19.36 KB/19824, currentSize=12.61 KB/12912 for a8913070c8a542b8076a02b10e0081b5 in 34ms, sequenceid=152, compaction requested=true
2023-06-08 18:59:01,439 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for a8913070c8a542b8076a02b10e0081b5:
2023-06-08 18:59:01,439 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-06-08 18:59:01,439 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-06-08 18:59:01,440 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 72020 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-06-08 18:59:01,440 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HStore(1912): a8913070c8a542b8076a02b10e0081b5/info is initiating minor compaction (all files)
2023-06-08 18:59:01,440 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of a8913070c8a542b8076a02b10e0081b5/info in TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.
2023-06-08 18:59:01,440 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/b30b8becffac4047bb8a22499004e981, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/8af9347ddfda4a9bb950e3b94205456b, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/a668ceb53be54751b6e8b058154bb78e] into tmpdir=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp, totalSize=70.3 K
2023-06-08 18:59:01,441 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.Compactor(207): Compacting b30b8becffac4047bb8a22499004e981, keycount=29, bloomtype=ROW, size=35.6 K, encoding=NONE, compression=NONE, seqNum=121, earliestPutTs=1686250717233
2023-06-08 18:59:01,441 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.Compactor(207): Compacting 8af9347ddfda4a9bb950e3b94205456b, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=132, earliestPutTs=1686250739367
2023-06-08 18:59:01,441 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.Compactor(207): Compacting a668ceb53be54751b6e8b058154bb78e, keycount=17, bloomtype=ROW, size=22.7 K, encoding=NONE, compression=NONE, seqNum=152, earliestPutTs=1686250741381
2023-06-08 18:59:01,452 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] throttle.PressureAwareThroughputController(145): a8913070c8a542b8076a02b10e0081b5#info#compaction#44 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-06-08 18:59:01,471 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/5ad19e1904a04bd9964cff8f9bb389ac as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/5ad19e1904a04bd9964cff8f9bb389ac
2023-06-08 18:59:01,483 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in a8913070c8a542b8076a02b10e0081b5/info of a8913070c8a542b8076a02b10e0081b5 into 5ad19e1904a04bd9964cff8f9bb389ac(size=61.0 K), total size for store is 61.0 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-06-08 18:59:01,483 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for a8913070c8a542b8076a02b10e0081b5:
2023-06-08 18:59:01,483 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5., storeName=a8913070c8a542b8076a02b10e0081b5/info, priority=13, startTime=1686250741439; duration=0sec
2023-06-08 18:59:01,483 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-06-08 18:59:06,971 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): index stats (chunk size=209715): current pool size=0, created chunk count=0, reused chunk count=0, reuseRatio=0
2023-06-08 18:59:06,971 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): data stats (chunk size=2097152): current pool size=2, created chunk count=11, reused chunk count=35, reuseRatio=76.09%
2023-06-08 18:59:11,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43115] regionserver.HRegion(9158): Flush requested on a8913070c8a542b8076a02b10e0081b5
2023-06-08 18:59:11,526 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing a8913070c8a542b8076a02b10e0081b5 1/1 column families, dataSize=13.66 KB heapSize=14.88 KB
2023-06-08 18:59:11,564 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=13.66 KB at sequenceid=169 (bloomFilter=true), to=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/6df644da552c47b38cfd97703e665b4d
2023-06-08 18:59:11,584 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/6df644da552c47b38cfd97703e665b4d as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/6df644da552c47b38cfd97703e665b4d
2023-06-08 18:59:11,592 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/6df644da552c47b38cfd97703e665b4d, entries=13, sequenceid=169, filesize=18.4 K
2023-06-08 18:59:11,593 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~13.66 KB/13988, heapSize ~14.86 KB/15216, currentSize=1.05 KB/1076 for a8913070c8a542b8076a02b10e0081b5 in 67ms, sequenceid=169, compaction requested=false
2023-06-08 18:59:11,593 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for a8913070c8a542b8076a02b10e0081b5:
2023-06-08 18:59:13,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43115] regionserver.HRegion(9158): Flush requested on a8913070c8a542b8076a02b10e0081b5
2023-06-08 18:59:13,540 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing a8913070c8a542b8076a02b10e0081b5 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB
2023-06-08 18:59:13,558 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=179 (bloomFilter=true), to=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/55fe4072439741c8803b51ea68ea7b63
2023-06-08 18:59:13,565 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/55fe4072439741c8803b51ea68ea7b63 as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/55fe4072439741c8803b51ea68ea7b63
2023-06-08 18:59:13,571 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/55fe4072439741c8803b51ea68ea7b63, entries=7, sequenceid=179, filesize=12.1 K
2023-06-08 18:59:13,572 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=19.96 KB/20444 for a8913070c8a542b8076a02b10e0081b5 in 32ms, sequenceid=179, compaction requested=true
2023-06-08 18:59:13,572 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for a8913070c8a542b8076a02b10e0081b5:
2023-06-08 18:59:13,572 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-06-08 18:59:13,572 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-06-08 18:59:13,574 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43115] regionserver.HRegion(9158): Flush requested on a8913070c8a542b8076a02b10e0081b5
2023-06-08 18:59:13,574 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 93750 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-06-08 18:59:13,574 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HStore(1912): a8913070c8a542b8076a02b10e0081b5/info is initiating minor compaction (all files)
2023-06-08 18:59:13,574 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of a8913070c8a542b8076a02b10e0081b5/info in TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.
2023-06-08 18:59:13,574 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/5ad19e1904a04bd9964cff8f9bb389ac, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/6df644da552c47b38cfd97703e665b4d, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/55fe4072439741c8803b51ea68ea7b63] into tmpdir=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp, totalSize=91.6 K
2023-06-08 18:59:13,574 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing a8913070c8a542b8076a02b10e0081b5 1/1 column families, dataSize=21.02 KB heapSize=22.75 KB
2023-06-08 18:59:13,575 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.Compactor(207): Compacting 5ad19e1904a04bd9964cff8f9bb389ac, keycount=53, bloomtype=ROW, size=61.0 K, encoding=NONE, compression=NONE, seqNum=152, earliestPutTs=1686250717233
2023-06-08 18:59:13,575 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.Compactor(207): Compacting 6df644da552c47b38cfd97703e665b4d, keycount=13, bloomtype=ROW, size=18.4 K, encoding=NONE, compression=NONE, seqNum=169, earliestPutTs=1686250741406
2023-06-08 18:59:13,576 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.Compactor(207): Compacting 55fe4072439741c8803b51ea68ea7b63, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=179, earliestPutTs=1686250751527
2023-06-08 18:59:13,596 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] throttle.PressureAwareThroughputController(145): a8913070c8a542b8076a02b10e0081b5#info#compaction#48 average throughput is 74.91 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-06-08 18:59:13,610 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.02 KB at sequenceid=202 (bloomFilter=true), to=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/d09965e9be304906a689bd19a5f0e21a
2023-06-08 18:59:13,616 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/d09965e9be304906a689bd19a5f0e21a as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/d09965e9be304906a689bd19a5f0e21a
2023-06-08 18:59:13,621 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/d09965e9be304906a689bd19a5f0e21a, entries=20, sequenceid=202, filesize=25.8 K
2023-06-08 18:59:13,624 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/69a8b5f5ae294d739cd23391cd24bc37 as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/69a8b5f5ae294d739cd23391cd24bc37
2023-06-08 18:59:13,624 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.02 KB/21520, heapSize ~22.73 KB/23280, currentSize=6.30 KB/6456 for a8913070c8a542b8076a02b10e0081b5 in 50ms, sequenceid=202, compaction requested=false
2023-06-08 18:59:13,624 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for a8913070c8a542b8076a02b10e0081b5:
2023-06-08 18:59:13,631 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in a8913070c8a542b8076a02b10e0081b5/info of a8913070c8a542b8076a02b10e0081b5 into 69a8b5f5ae294d739cd23391cd24bc37(size=82.2 K), total size for store is 108.0 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-06-08 18:59:13,631 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for a8913070c8a542b8076a02b10e0081b5:
2023-06-08 18:59:13,631 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5., storeName=a8913070c8a542b8076a02b10e0081b5/info, priority=13, startTime=1686250753572; duration=0sec
2023-06-08 18:59:13,631 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-06-08 18:59:14,003 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties
2023-06-08 18:59:15,603 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43115] regionserver.HRegion(9158): Flush requested on a8913070c8a542b8076a02b10e0081b5
2023-06-08 18:59:15,604 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing a8913070c8a542b8076a02b10e0081b5 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB
2023-06-08 18:59:15,617 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=213 (bloomFilter=true), to=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/65fbd20a600a46bf9630f028b5854a32
2023-06-08 18:59:15,627 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/65fbd20a600a46bf9630f028b5854a32 as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/65fbd20a600a46bf9630f028b5854a32
2023-06-08 18:59:15,635 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/65fbd20a600a46bf9630f028b5854a32, entries=7, sequenceid=213, filesize=12.1 K
2023-06-08 18:59:15,636 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=13.66 KB/13988 for a8913070c8a542b8076a02b10e0081b5 in 32ms, sequenceid=213, compaction requested=true
2023-06-08 18:59:15,636 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for a8913070c8a542b8076a02b10e0081b5:
2023-06-08 18:59:15,637 DEBUG [RS:0;jenkins-hbase17:43115-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-06-08 18:59:15,637 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-06-08 18:59:15,638 DEBUG [RS:0;jenkins-hbase17:43115-longCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 123035 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-06-08 18:59:15,638 DEBUG [RS:0;jenkins-hbase17:43115-longCompactions-0] regionserver.HStore(1912): a8913070c8a542b8076a02b10e0081b5/info is initiating minor compaction (all files)
2023-06-08 18:59:15,638 INFO [RS:0;jenkins-hbase17:43115-longCompactions-0] regionserver.HRegion(2259): Starting compaction of a8913070c8a542b8076a02b10e0081b5/info in TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.
2023-06-08 18:59:15,638 INFO [RS:0;jenkins-hbase17:43115-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/69a8b5f5ae294d739cd23391cd24bc37, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/d09965e9be304906a689bd19a5f0e21a, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/65fbd20a600a46bf9630f028b5854a32] into tmpdir=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp, totalSize=120.2 K
2023-06-08 18:59:15,639 DEBUG [RS:0;jenkins-hbase17:43115-longCompactions-0] compactions.Compactor(207): Compacting 69a8b5f5ae294d739cd23391cd24bc37, keycount=73, bloomtype=ROW, size=82.2 K, encoding=NONE, compression=NONE, seqNum=179, earliestPutTs=1686250717233
2023-06-08 18:59:15,639 DEBUG [RS:0;jenkins-hbase17:43115-longCompactions-0] compactions.Compactor(207): Compacting d09965e9be304906a689bd19a5f0e21a, keycount=20, bloomtype=ROW, size=25.8 K, encoding=NONE, compression=NONE, seqNum=202, earliestPutTs=1686250753540
2023-06-08 18:59:15,640 DEBUG [RS:0;jenkins-hbase17:43115-longCompactions-0] compactions.Compactor(207): Compacting 65fbd20a600a46bf9630f028b5854a32, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=213, earliestPutTs=1686250753577
2023-06-08 18:59:15,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43115] regionserver.HRegion(9158): Flush requested on a8913070c8a542b8076a02b10e0081b5
2023-06-08 18:59:15,644 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing a8913070c8a542b8076a02b10e0081b5 1/1 column families, dataSize=15.76 KB heapSize=17.13 KB
2023-06-08 18:59:15,659 INFO [RS:0;jenkins-hbase17:43115-longCompactions-0] throttle.PressureAwareThroughputController(145): a8913070c8a542b8076a02b10e0081b5#info#compaction#51 average throughput is 34.21 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-06-08 18:59:15,676 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43115] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=a8913070c8a542b8076a02b10e0081b5, server=jenkins-hbase17.apache.org,43115,1686250704061
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-06-08 18:59:15,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43115] ipc.CallRunner(144): callId: 207 service: ClientService methodName: Mutate size: 1.2 K connection: 136.243.18.41:35630 deadline: 1686250765675, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=a8913070c8a542b8076a02b10e0081b5, server=jenkins-hbase17.apache.org,43115,1686250704061
2023-06-08 18:59:15,679 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=15.76 KB at sequenceid=231 (bloomFilter=true), to=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/de4e171948be47e998f8978847ba6c7c
2023-06-08 18:59:15,682 DEBUG [RS:0;jenkins-hbase17:43115-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/19cfbb8f85a94ce9a672d835565e3c1e as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/19cfbb8f85a94ce9a672d835565e3c1e
2023-06-08 18:59:15,684 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/de4e171948be47e998f8978847ba6c7c as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/de4e171948be47e998f8978847ba6c7c
2023-06-08 18:59:15,694 INFO [RS:0;jenkins-hbase17:43115-longCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in a8913070c8a542b8076a02b10e0081b5/info of a8913070c8a542b8076a02b10e0081b5 into 19cfbb8f85a94ce9a672d835565e3c1e(size=110.7 K), total size for store is 110.7 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-06-08 18:59:15,694 DEBUG [RS:0;jenkins-hbase17:43115-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for a8913070c8a542b8076a02b10e0081b5:
2023-06-08 18:59:15,694 INFO [RS:0;jenkins-hbase17:43115-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5., storeName=a8913070c8a542b8076a02b10e0081b5/info, priority=13, startTime=1686250755637; duration=0sec
2023-06-08 18:59:15,694 DEBUG [RS:0;jenkins-hbase17:43115-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-06-08 18:59:15,695 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/de4e171948be47e998f8978847ba6c7c, entries=15, sequenceid=231, filesize=20.6 K
2023-06-08 18:59:15,696 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~15.76 KB/16140, heapSize ~17.11 KB/17520, currentSize=14.71 KB/15064 for a8913070c8a542b8076a02b10e0081b5 in 52ms, sequenceid=231, compaction requested=false
2023-06-08 18:59:15,696 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for a8913070c8a542b8076a02b10e0081b5:
2023-06-08 18:59:25,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43115] regionserver.HRegion(9158): Flush requested on a8913070c8a542b8076a02b10e0081b5
2023-06-08 18:59:25,697 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing a8913070c8a542b8076a02b10e0081b5 1/1 column families, dataSize=15.76 KB heapSize=17.13 KB
2023-06-08 18:59:25,713 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=15.76 KB at sequenceid=250 (bloomFilter=true), to=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/3abb57d1ead940f0b0d9174f59288046
2023-06-08 18:59:25,722 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/3abb57d1ead940f0b0d9174f59288046 as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/3abb57d1ead940f0b0d9174f59288046
2023-06-08 18:59:25,727 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/3abb57d1ead940f0b0d9174f59288046, entries=15, sequenceid=250, filesize=20.6 K
2023-06-08 18:59:25,728 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~15.76 KB/16140, heapSize ~17.11 KB/17520, currentSize=1.05 KB/1076 for a8913070c8a542b8076a02b10e0081b5 in 31ms, sequenceid=250, compaction requested=true
2023-06-08 18:59:25,728 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for a8913070c8a542b8076a02b10e0081b5:
2023-06-08 18:59:25,728 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-06-08 18:59:25,728 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-06-08 18:59:25,729 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 155497 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-06-08 18:59:25,730 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HStore(1912): a8913070c8a542b8076a02b10e0081b5/info is initiating minor compaction (all files)
2023-06-08 18:59:25,730 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of a8913070c8a542b8076a02b10e0081b5/info in TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.
2023-06-08 18:59:25,730 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/19cfbb8f85a94ce9a672d835565e3c1e, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/de4e171948be47e998f8978847ba6c7c, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/3abb57d1ead940f0b0d9174f59288046] into tmpdir=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp, totalSize=151.9 K
2023-06-08 18:59:25,730 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.Compactor(207): Compacting 19cfbb8f85a94ce9a672d835565e3c1e, keycount=100, bloomtype=ROW, size=110.7 K, encoding=NONE, compression=NONE, seqNum=213, earliestPutTs=1686250717233
2023-06-08 18:59:25,730 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.Compactor(207): Compacting de4e171948be47e998f8978847ba6c7c, keycount=15, bloomtype=ROW, size=20.6 K, encoding=NONE, compression=NONE, seqNum=231, earliestPutTs=1686250755605
2023-06-08 18:59:25,731 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.Compactor(207): Compacting 3abb57d1ead940f0b0d9174f59288046, keycount=15, bloomtype=ROW, size=20.6 K, encoding=NONE, compression=NONE, seqNum=250, earliestPutTs=1686250755645
2023-06-08 18:59:25,799 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] throttle.PressureAwareThroughputController(145): a8913070c8a542b8076a02b10e0081b5#info#compaction#53 average throughput is 44.47 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-06-08 18:59:25,808 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/a920d16ee211413591f484e047faa85a as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/a920d16ee211413591f484e047faa85a
2023-06-08 18:59:25,813 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in a8913070c8a542b8076a02b10e0081b5/info of a8913070c8a542b8076a02b10e0081b5 into a920d16ee211413591f484e047faa85a(size=142.6 K), total size for store is 142.6 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-06-08 18:59:25,813 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for a8913070c8a542b8076a02b10e0081b5: 2023-06-08 18:59:25,813 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5., storeName=a8913070c8a542b8076a02b10e0081b5/info, priority=13, startTime=1686250765728; duration=0sec 2023-06-08 18:59:25,813 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-08 18:59:27,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43115] regionserver.HRegion(9158): Flush requested on a8913070c8a542b8076a02b10e0081b5 2023-06-08 18:59:27,715 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing a8913070c8a542b8076a02b10e0081b5 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-08 18:59:27,724 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=261 (bloomFilter=true), to=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/726b220e0a0041b38412dd5b685f9b81 2023-06-08 18:59:27,729 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/726b220e0a0041b38412dd5b685f9b81 as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/726b220e0a0041b38412dd5b685f9b81 2023-06-08 18:59:27,735 INFO 
[MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/726b220e0a0041b38412dd5b685f9b81, entries=7, sequenceid=261, filesize=12.1 K 2023-06-08 18:59:27,735 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=17.86 KB/18292 for a8913070c8a542b8076a02b10e0081b5 in 20ms, sequenceid=261, compaction requested=false 2023-06-08 18:59:27,735 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for a8913070c8a542b8076a02b10e0081b5: 2023-06-08 18:59:27,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43115] regionserver.HRegion(9158): Flush requested on a8913070c8a542b8076a02b10e0081b5 2023-06-08 18:59:27,736 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing a8913070c8a542b8076a02b10e0081b5 1/1 column families, dataSize=18.91 KB heapSize=20.50 KB 2023-06-08 18:59:27,744 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=18.91 KB at sequenceid=282 (bloomFilter=true), to=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/a2f7f62cf1c547af928b7215a4bd4041 2023-06-08 18:59:27,749 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/a2f7f62cf1c547af928b7215a4bd4041 as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/a2f7f62cf1c547af928b7215a4bd4041 2023-06-08 18:59:27,754 INFO [MemStoreFlusher.0] 
regionserver.HStore(1080): Added hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/a2f7f62cf1c547af928b7215a4bd4041, entries=18, sequenceid=282, filesize=23.7 K 2023-06-08 18:59:27,755 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~18.91 KB/19368, heapSize ~20.48 KB/20976, currentSize=8.41 KB/8608 for a8913070c8a542b8076a02b10e0081b5 in 19ms, sequenceid=282, compaction requested=true 2023-06-08 18:59:27,755 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for a8913070c8a542b8076a02b10e0081b5: 2023-06-08 18:59:27,755 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-06-08 18:59:27,755 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-08 18:59:27,756 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 182765 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-08 18:59:27,756 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HStore(1912): a8913070c8a542b8076a02b10e0081b5/info is initiating minor compaction (all files) 2023-06-08 18:59:27,756 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of a8913070c8a542b8076a02b10e0081b5/info in TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5. 
2023-06-08 18:59:27,756 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/a920d16ee211413591f484e047faa85a, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/726b220e0a0041b38412dd5b685f9b81, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/a2f7f62cf1c547af928b7215a4bd4041] into tmpdir=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp, totalSize=178.5 K 2023-06-08 18:59:27,757 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.Compactor(207): Compacting a920d16ee211413591f484e047faa85a, keycount=130, bloomtype=ROW, size=142.6 K, encoding=NONE, compression=NONE, seqNum=250, earliestPutTs=1686250717233 2023-06-08 18:59:27,757 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.Compactor(207): Compacting 726b220e0a0041b38412dd5b685f9b81, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=261, earliestPutTs=1686250765698 2023-06-08 18:59:27,757 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] compactions.Compactor(207): Compacting a2f7f62cf1c547af928b7215a4bd4041, keycount=18, bloomtype=ROW, size=23.7 K, encoding=NONE, compression=NONE, seqNum=282, earliestPutTs=1686250767715 2023-06-08 18:59:27,767 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] throttle.PressureAwareThroughputController(145): a8913070c8a542b8076a02b10e0081b5#info#compaction#56 average throughput is 53.02 MB/second, slept 0 time(s) and 
total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-08 18:59:27,783 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/4e945f1790844315b63bf298fc1cd2c5 as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/4e945f1790844315b63bf298fc1cd2c5 2023-06-08 18:59:27,788 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in a8913070c8a542b8076a02b10e0081b5/info of a8913070c8a542b8076a02b10e0081b5 into 4e945f1790844315b63bf298fc1cd2c5(size=169.1 K), total size for store is 169.1 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-08 18:59:27,789 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for a8913070c8a542b8076a02b10e0081b5: 2023-06-08 18:59:27,789 INFO [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5., storeName=a8913070c8a542b8076a02b10e0081b5/info, priority=13, startTime=1686250767755; duration=0sec 2023-06-08 18:59:27,789 DEBUG [RS:0;jenkins-hbase17:43115-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-08 18:59:29,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43115] regionserver.HRegion(9158): Flush requested on a8913070c8a542b8076a02b10e0081b5 2023-06-08 18:59:29,748 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing a8913070c8a542b8076a02b10e0081b5 1/1 column families, dataSize=9.46 KB heapSize=10.38 KB 2023-06-08 18:59:29,762 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=9.46 KB at sequenceid=295 (bloomFilter=true), to=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/40f0d16f4db341ae829da8a5503ac2b9 2023-06-08 18:59:29,768 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/40f0d16f4db341ae829da8a5503ac2b9 as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/40f0d16f4db341ae829da8a5503ac2b9 2023-06-08 18:59:29,775 INFO 
[MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/40f0d16f4db341ae829da8a5503ac2b9, entries=9, sequenceid=295, filesize=14.2 K 2023-06-08 18:59:29,776 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~9.46 KB/9684, heapSize ~10.36 KB/10608, currentSize=16.81 KB/17216 for a8913070c8a542b8076a02b10e0081b5 in 28ms, sequenceid=295, compaction requested=false 2023-06-08 18:59:29,777 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for a8913070c8a542b8076a02b10e0081b5: 2023-06-08 18:59:29,777 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43115] regionserver.HRegion(9158): Flush requested on a8913070c8a542b8076a02b10e0081b5 2023-06-08 18:59:29,777 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing a8913070c8a542b8076a02b10e0081b5 1/1 column families, dataSize=17.86 KB heapSize=19.38 KB 2023-06-08 18:59:29,790 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43115] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=a8913070c8a542b8076a02b10e0081b5, server=jenkins-hbase17.apache.org,43115,1686250704061
    at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
    at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-06-08 18:59:29,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43115] ipc.CallRunner(144): callId: 273 service: ClientService methodName: Mutate size: 1.2 K connection: 136.243.18.41:35630 deadline: 1686250779790, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=a8913070c8a542b8076a02b10e0081b5, server=jenkins-hbase17.apache.org,43115,1686250704061
2023-06-08 18:59:29,793 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=17.86 KB at sequenceid=315 (bloomFilter=true), to=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/02ea735fffa34f6c86f8284d2d925d4c
2023-06-08 18:59:29,798 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/02ea735fffa34f6c86f8284d2d925d4c as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/02ea735fffa34f6c86f8284d2d925d4c 2023-06-08 18:59:29,802 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/02ea735fffa34f6c86f8284d2d925d4c, entries=17, sequenceid=315, filesize=22.7 K 2023-06-08 18:59:29,803 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~17.86 KB/18292, heapSize ~19.36 KB/19824, currentSize=12.61 KB/12912 for a8913070c8a542b8076a02b10e0081b5 in 26ms, sequenceid=315, compaction requested=true 2023-06-08 18:59:29,803 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for a8913070c8a542b8076a02b10e0081b5: 2023-06-08 18:59:29,803 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-06-08 18:59:29,803 DEBUG [RS:0;jenkins-hbase17:43115-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-08 18:59:29,804 DEBUG [RS:0;jenkins-hbase17:43115-longCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 210929 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-08 18:59:29,804 DEBUG [RS:0;jenkins-hbase17:43115-longCompactions-0] regionserver.HStore(1912): a8913070c8a542b8076a02b10e0081b5/info is initiating minor 
compaction (all files) 2023-06-08 18:59:29,804 INFO [RS:0;jenkins-hbase17:43115-longCompactions-0] regionserver.HRegion(2259): Starting compaction of a8913070c8a542b8076a02b10e0081b5/info in TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5. 2023-06-08 18:59:29,804 INFO [RS:0;jenkins-hbase17:43115-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/4e945f1790844315b63bf298fc1cd2c5, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/40f0d16f4db341ae829da8a5503ac2b9, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/02ea735fffa34f6c86f8284d2d925d4c] into tmpdir=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp, totalSize=206.0 K 2023-06-08 18:59:29,805 DEBUG [RS:0;jenkins-hbase17:43115-longCompactions-0] compactions.Compactor(207): Compacting 4e945f1790844315b63bf298fc1cd2c5, keycount=155, bloomtype=ROW, size=169.1 K, encoding=NONE, compression=NONE, seqNum=282, earliestPutTs=1686250717233 2023-06-08 18:59:29,805 DEBUG [RS:0;jenkins-hbase17:43115-longCompactions-0] compactions.Compactor(207): Compacting 40f0d16f4db341ae829da8a5503ac2b9, keycount=9, bloomtype=ROW, size=14.2 K, encoding=NONE, compression=NONE, seqNum=295, earliestPutTs=1686250767736 2023-06-08 18:59:29,806 DEBUG [RS:0;jenkins-hbase17:43115-longCompactions-0] compactions.Compactor(207): Compacting 02ea735fffa34f6c86f8284d2d925d4c, keycount=17, bloomtype=ROW, size=22.7 K, encoding=NONE, compression=NONE, 
seqNum=315, earliestPutTs=1686250769750 2023-06-08 18:59:29,818 INFO [RS:0;jenkins-hbase17:43115-longCompactions-0] throttle.PressureAwareThroughputController(145): a8913070c8a542b8076a02b10e0081b5#info#compaction#59 average throughput is 92.87 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-08 18:59:29,830 DEBUG [RS:0;jenkins-hbase17:43115-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/fe1a336d029144a6a301db05349ae09b as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/fe1a336d029144a6a301db05349ae09b 2023-06-08 18:59:29,834 INFO [RS:0;jenkins-hbase17:43115-longCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in a8913070c8a542b8076a02b10e0081b5/info of a8913070c8a542b8076a02b10e0081b5 into fe1a336d029144a6a301db05349ae09b(size=196.6 K), total size for store is 196.6 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-06-08 18:59:29,835 DEBUG [RS:0;jenkins-hbase17:43115-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for a8913070c8a542b8076a02b10e0081b5: 2023-06-08 18:59:29,835 INFO [RS:0;jenkins-hbase17:43115-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5., storeName=a8913070c8a542b8076a02b10e0081b5/info, priority=13, startTime=1686250769803; duration=0sec 2023-06-08 18:59:29,835 DEBUG [RS:0;jenkins-hbase17:43115-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-08 18:59:39,812 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43115] regionserver.HRegion(9158): Flush requested on a8913070c8a542b8076a02b10e0081b5 2023-06-08 18:59:39,812 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing a8913070c8a542b8076a02b10e0081b5 1/1 column families, dataSize=13.66 KB heapSize=14.88 KB 2023-06-08 18:59:39,824 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=13.66 KB at sequenceid=332 (bloomFilter=true), to=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/6c0907bb76e244e7a51cb122a62c1b84 2023-06-08 18:59:39,830 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/6c0907bb76e244e7a51cb122a62c1b84 as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/6c0907bb76e244e7a51cb122a62c1b84 2023-06-08 18:59:39,837 INFO 
[MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/6c0907bb76e244e7a51cb122a62c1b84, entries=13, sequenceid=332, filesize=18.5 K 2023-06-08 18:59:39,838 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~13.66 KB/13988, heapSize ~14.86 KB/15216, currentSize=1.05 KB/1076 for a8913070c8a542b8076a02b10e0081b5 in 26ms, sequenceid=332, compaction requested=false 2023-06-08 18:59:39,838 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for a8913070c8a542b8076a02b10e0081b5: 2023-06-08 18:59:41,814 INFO [Listener at localhost.localdomain/41149] wal.AbstractTestLogRolling(188): after writing there are 0 log files 2023-06-08 18:59:41,844 INFO [Listener at localhost.localdomain/41149] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/WALs/jenkins-hbase17.apache.org,43115,1686250704061/jenkins-hbase17.apache.org%2C43115%2C1686250704061.1686250704449 with entries=316, filesize=309.16 KB; new WAL /user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/WALs/jenkins-hbase17.apache.org,43115,1686250704061/jenkins-hbase17.apache.org%2C43115%2C1686250704061.1686250781815 2023-06-08 18:59:41,844 DEBUG [Listener at localhost.localdomain/41149] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40119,DS-76108eb8-f84a-46fe-bd7e-9bd153114a8f,DISK], DatanodeInfoWithStorage[127.0.0.1:40371,DS-5737ba8a-19a1-4285-82ac-65a2e3fabefd,DISK]] 2023-06-08 18:59:41,845 DEBUG [Listener at localhost.localdomain/41149] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/WALs/jenkins-hbase17.apache.org,43115,1686250704061/jenkins-hbase17.apache.org%2C43115%2C1686250704061.1686250704449 is not closed yet, will try 
archiving it next time 2023-06-08 18:59:41,853 DEBUG [Listener at localhost.localdomain/41149] regionserver.HRegion(2446): Flush status journal for c873d26ccca20e5c8b0cb6b968f48772: 2023-06-08 18:59:41,853 INFO [Listener at localhost.localdomain/41149] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.26 KB heapSize=4.19 KB 2023-06-08 18:59:41,861 INFO [Listener at localhost.localdomain/41149] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.26 KB at sequenceid=24 (bloomFilter=false), to=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/meta/1588230740/.tmp/info/f74bc13e598e423db8909137185f152c 2023-06-08 18:59:41,866 DEBUG [Listener at localhost.localdomain/41149] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/meta/1588230740/.tmp/info/f74bc13e598e423db8909137185f152c as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/meta/1588230740/info/f74bc13e598e423db8909137185f152c 2023-06-08 18:59:41,870 INFO [Listener at localhost.localdomain/41149] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/meta/1588230740/info/f74bc13e598e423db8909137185f152c, entries=16, sequenceid=24, filesize=7.0 K 2023-06-08 18:59:41,871 INFO [Listener at localhost.localdomain/41149] regionserver.HRegion(2948): Finished flush of dataSize ~2.26 KB/2316, heapSize ~3.67 KB/3760, currentSize=0 B/0 for 1588230740 in 18ms, sequenceid=24, compaction requested=false 2023-06-08 18:59:41,871 DEBUG [Listener at localhost.localdomain/41149] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-06-08 18:59:41,872 INFO [Listener at localhost.localdomain/41149] regionserver.HRegion(2745): Flushing 281d7f0972be0f385e77be99bf4769cd 1/1 column 
families, dataSize=78 B heapSize=488 B
2023-06-08 18:59:41,884 INFO [Listener at localhost.localdomain/41149] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/namespace/281d7f0972be0f385e77be99bf4769cd/.tmp/info/d5eb6f5b4bd1419ca8743685fe9c0715
2023-06-08 18:59:41,890 DEBUG [Listener at localhost.localdomain/41149] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/namespace/281d7f0972be0f385e77be99bf4769cd/.tmp/info/d5eb6f5b4bd1419ca8743685fe9c0715 as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/namespace/281d7f0972be0f385e77be99bf4769cd/info/d5eb6f5b4bd1419ca8743685fe9c0715
2023-06-08 18:59:41,896 INFO [Listener at localhost.localdomain/41149] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/namespace/281d7f0972be0f385e77be99bf4769cd/info/d5eb6f5b4bd1419ca8743685fe9c0715, entries=2, sequenceid=6, filesize=4.8 K
2023-06-08 18:59:41,898 INFO [Listener at localhost.localdomain/41149] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 281d7f0972be0f385e77be99bf4769cd in 26ms, sequenceid=6, compaction requested=false
2023-06-08 18:59:41,899 DEBUG [Listener at localhost.localdomain/41149] regionserver.HRegion(2446): Flush status journal for 281d7f0972be0f385e77be99bf4769cd:
2023-06-08 18:59:41,899 INFO [Listener at localhost.localdomain/41149] regionserver.HRegion(2745): Flushing a8913070c8a542b8076a02b10e0081b5 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB
2023-06-08 18:59:41,910 INFO [Listener at localhost.localdomain/41149] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=336 (bloomFilter=true), to=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/8c687f3501674844b6fdfe09bdc8737c
2023-06-08 18:59:41,922 DEBUG [Listener at localhost.localdomain/41149] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/.tmp/info/8c687f3501674844b6fdfe09bdc8737c as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/8c687f3501674844b6fdfe09bdc8737c
2023-06-08 18:59:41,929 INFO [Listener at localhost.localdomain/41149] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/8c687f3501674844b6fdfe09bdc8737c, entries=1, sequenceid=336, filesize=5.8 K
2023-06-08 18:59:41,932 INFO [Listener at localhost.localdomain/41149] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for a8913070c8a542b8076a02b10e0081b5 in 33ms, sequenceid=336, compaction requested=true
2023-06-08 18:59:41,932 DEBUG [Listener at localhost.localdomain/41149] regionserver.HRegion(2446): Flush status journal for a8913070c8a542b8076a02b10e0081b5:
2023-06-08 18:59:41,954 INFO [Listener at localhost.localdomain/41149] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/WALs/jenkins-hbase17.apache.org,43115,1686250704061/jenkins-hbase17.apache.org%2C43115%2C1686250704061.1686250781815 with entries=4, filesize=1.22 KB; new WAL /user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/WALs/jenkins-hbase17.apache.org,43115,1686250704061/jenkins-hbase17.apache.org%2C43115%2C1686250704061.1686250781932
2023-06-08 18:59:41,955 DEBUG [Listener at localhost.localdomain/41149] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40371,DS-5737ba8a-19a1-4285-82ac-65a2e3fabefd,DISK], DatanodeInfoWithStorage[127.0.0.1:40119,DS-76108eb8-f84a-46fe-bd7e-9bd153114a8f,DISK]]
2023-06-08 18:59:41,955 DEBUG [Listener at localhost.localdomain/41149] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/WALs/jenkins-hbase17.apache.org,43115,1686250704061/jenkins-hbase17.apache.org%2C43115%2C1686250704061.1686250781815 is not closed yet, will try archiving it next time
2023-06-08 18:59:41,955 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/WALs/jenkins-hbase17.apache.org,43115,1686250704061/jenkins-hbase17.apache.org%2C43115%2C1686250704061.1686250704449 to hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/oldWALs/jenkins-hbase17.apache.org%2C43115%2C1686250704061.1686250704449
2023-06-08 18:59:41,957 INFO [Listener at localhost.localdomain/41149] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1])
2023-06-08 18:59:41,959 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/WALs/jenkins-hbase17.apache.org,43115,1686250704061/jenkins-hbase17.apache.org%2C43115%2C1686250704061.1686250781815 to hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/oldWALs/jenkins-hbase17.apache.org%2C43115%2C1686250704061.1686250781815
2023-06-08 18:59:42,057 INFO [Listener at localhost.localdomain/41149] hbase.HBaseTestingUtility(1286): Shutting down minicluster
2023-06-08 18:59:42,057 INFO [Listener at localhost.localdomain/41149] client.ConnectionImplementation(1980): Closing master protocol: MasterService
2023-06-08 18:59:42,057 DEBUG [Listener at localhost.localdomain/41149] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0ed2e9bd to 127.0.0.1:58592
2023-06-08 18:59:42,057 DEBUG [Listener at localhost.localdomain/41149] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-08 18:59:42,057 DEBUG [Listener at localhost.localdomain/41149] util.JVMClusterUtil(237): Shutting down HBase Cluster
2023-06-08 18:59:42,057 DEBUG [Listener at localhost.localdomain/41149] util.JVMClusterUtil(257): Found active master hash=1566151196, stopped=false
2023-06-08 18:59:42,057 INFO [Listener at localhost.localdomain/41149] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase17.apache.org,36347,1686250704023
2023-06-08 18:59:42,059 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-06-08 18:59:42,059 INFO [Listener at localhost.localdomain/41149] procedure2.ProcedureExecutor(629): Stopping
2023-06-08 18:59:42,059 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-08 18:59:42,059 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): regionserver:43115-0x100abcd479d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-06-08 18:59:42,059 DEBUG [Listener at localhost.localdomain/41149] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0e9ba322 to 127.0.0.1:58592
2023-06-08 18:59:42,060 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-08 18:59:42,060 DEBUG [Listener at localhost.localdomain/41149] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-08 18:59:42,060 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43115-0x100abcd479d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-08 18:59:42,060 INFO [Listener at localhost.localdomain/41149] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase17.apache.org,43115,1686250704061' *****
2023-06-08 18:59:42,060 INFO [Listener at localhost.localdomain/41149] regionserver.HRegionServer(2309): STOPPED: Shutdown requested
2023-06-08 18:59:42,060 INFO [RS:0;jenkins-hbase17:43115] regionserver.HeapMemoryManager(220): Stopping
2023-06-08 18:59:42,060 INFO [RS:0;jenkins-hbase17:43115] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully.
2023-06-08 18:59:42,060 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting
2023-06-08 18:59:42,060 INFO [RS:0;jenkins-hbase17:43115] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully.
2023-06-08 18:59:42,061 INFO [RS:0;jenkins-hbase17:43115] regionserver.HRegionServer(3303): Received CLOSE for c873d26ccca20e5c8b0cb6b968f48772
2023-06-08 18:59:42,061 INFO [RS:0;jenkins-hbase17:43115] regionserver.HRegionServer(3303): Received CLOSE for 281d7f0972be0f385e77be99bf4769cd
2023-06-08 18:59:42,061 INFO [RS:0;jenkins-hbase17:43115] regionserver.HRegionServer(3303): Received CLOSE for a8913070c8a542b8076a02b10e0081b5
2023-06-08 18:59:42,061 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing c873d26ccca20e5c8b0cb6b968f48772, disabling compactions & flushes
2023-06-08 18:59:42,061 INFO [RS:0;jenkins-hbase17:43115] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,43115,1686250704061
2023-06-08 18:59:42,061 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1686250727370.c873d26ccca20e5c8b0cb6b968f48772.
2023-06-08 18:59:42,061 DEBUG [RS:0;jenkins-hbase17:43115] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x20b478f6 to 127.0.0.1:58592
2023-06-08 18:59:42,061 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1686250727370.c873d26ccca20e5c8b0cb6b968f48772.
2023-06-08 18:59:42,061 DEBUG [RS:0;jenkins-hbase17:43115] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-08 18:59:42,061 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1686250727370.c873d26ccca20e5c8b0cb6b968f48772. after waiting 0 ms
2023-06-08 18:59:42,061 INFO [RS:0;jenkins-hbase17:43115] regionserver.CompactSplit(434): Waiting for Split Thread to finish...
2023-06-08 18:59:42,061 INFO [RS:0;jenkins-hbase17:43115] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish...
2023-06-08 18:59:42,061 INFO [RS:0;jenkins-hbase17:43115] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish...
2023-06-08 18:59:42,061 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1686250727370.c873d26ccca20e5c8b0cb6b968f48772.
2023-06-08 18:59:42,061 INFO [RS:0;jenkins-hbase17:43115] regionserver.HRegionServer(3303): Received CLOSE for 1588230740
2023-06-08 18:59:42,061 INFO [RS:0;jenkins-hbase17:43115] regionserver.HRegionServer(1474): Waiting on 4 regions to close
2023-06-08 18:59:42,061 DEBUG [RS:0;jenkins-hbase17:43115] regionserver.HRegionServer(1478): Online Regions={c873d26ccca20e5c8b0cb6b968f48772=TestLogRolling-testLogRolling,,1686250727370.c873d26ccca20e5c8b0cb6b968f48772., 1588230740=hbase:meta,,1.1588230740, 281d7f0972be0f385e77be99bf4769cd=hbase:namespace,,1686250704598.281d7f0972be0f385e77be99bf4769cd., a8913070c8a542b8076a02b10e0081b5=TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.}
2023-06-08 18:59:42,061 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes
2023-06-08 18:59:42,062 DEBUG [RS:0;jenkins-hbase17:43115] regionserver.HRegionServer(1504): Waiting on 1588230740, 281d7f0972be0f385e77be99bf4769cd, a8913070c8a542b8076a02b10e0081b5, c873d26ccca20e5c8b0cb6b968f48772
2023-06-08 18:59:42,062 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740
2023-06-08 18:59:42,062 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740
2023-06-08 18:59:42,062 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686250727370.c873d26ccca20e5c8b0cb6b968f48772.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/c873d26ccca20e5c8b0cb6b968f48772/info/76ee1f39b051458c9be973f5d17f864e.7bb70d321c62b16faf748011879faa7a->hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/76ee1f39b051458c9be973f5d17f864e-bottom] to archive
2023-06-08 18:59:42,062 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms
2023-06-08 18:59:42,063 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740
2023-06-08 18:59:42,064 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686250727370.c873d26ccca20e5c8b0cb6b968f48772.-1] backup.HFileArchiver(360): Archiving compacted files.
2023-06-08 18:59:42,066 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686250727370.c873d26ccca20e5c8b0cb6b968f48772.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/c873d26ccca20e5c8b0cb6b968f48772/info/76ee1f39b051458c9be973f5d17f864e.7bb70d321c62b16faf748011879faa7a to hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/archive/data/default/TestLogRolling-testLogRolling/c873d26ccca20e5c8b0cb6b968f48772/info/76ee1f39b051458c9be973f5d17f864e.7bb70d321c62b16faf748011879faa7a
2023-06-08 18:59:42,077 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/meta/1588230740/recovered.edits/27.seqid, newMaxSeqId=27, maxSeqId=1
2023-06-08 18:59:42,077 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/c873d26ccca20e5c8b0cb6b968f48772/recovered.edits/93.seqid, newMaxSeqId=93, maxSeqId=88
2023-06-08 18:59:42,077 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2023-06-08 18:59:42,078 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740
2023-06-08 18:59:42,078 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-06-08 18:59:42,078 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740
2023-06-08 18:59:42,078 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1686250727370.c873d26ccca20e5c8b0cb6b968f48772.
2023-06-08 18:59:42,078 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for c873d26ccca20e5c8b0cb6b968f48772:
2023-06-08 18:59:42,078 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,,1686250727370.c873d26ccca20e5c8b0cb6b968f48772.
2023-06-08 18:59:42,078 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 281d7f0972be0f385e77be99bf4769cd, disabling compactions & flushes
2023-06-08 18:59:42,078 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686250704598.281d7f0972be0f385e77be99bf4769cd.
2023-06-08 18:59:42,078 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686250704598.281d7f0972be0f385e77be99bf4769cd.
2023-06-08 18:59:42,079 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686250704598.281d7f0972be0f385e77be99bf4769cd. after waiting 0 ms
2023-06-08 18:59:42,079 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686250704598.281d7f0972be0f385e77be99bf4769cd.
2023-06-08 18:59:42,083 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/hbase/namespace/281d7f0972be0f385e77be99bf4769cd/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1
2023-06-08 18:59:42,084 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686250704598.281d7f0972be0f385e77be99bf4769cd.
2023-06-08 18:59:42,084 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 281d7f0972be0f385e77be99bf4769cd:
2023-06-08 18:59:42,084 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1686250704598.281d7f0972be0f385e77be99bf4769cd.
2023-06-08 18:59:42,084 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing a8913070c8a542b8076a02b10e0081b5, disabling compactions & flushes
2023-06-08 18:59:42,084 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.
2023-06-08 18:59:42,084 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.
2023-06-08 18:59:42,084 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5. after waiting 0 ms
2023-06-08 18:59:42,084 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.
2023-06-08 18:59:42,099 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/76ee1f39b051458c9be973f5d17f864e.7bb70d321c62b16faf748011879faa7a->hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/7bb70d321c62b16faf748011879faa7a/info/76ee1f39b051458c9be973f5d17f864e-top, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/6fcc58b3268d4034854f790fc52e2d32, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/TestLogRolling-testLogRolling=7bb70d321c62b16faf748011879faa7a-6d096f73787d47c18f45a9705173f797, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/2dff9f40227242cd87d1b925f45b998f, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/b30b8becffac4047bb8a22499004e981, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/d7a72fb1b5db48cb9e9064ad5a835928, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/8af9347ddfda4a9bb950e3b94205456b, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/5ad19e1904a04bd9964cff8f9bb389ac, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/a668ceb53be54751b6e8b058154bb78e, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/6df644da552c47b38cfd97703e665b4d, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/69a8b5f5ae294d739cd23391cd24bc37, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/55fe4072439741c8803b51ea68ea7b63, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/d09965e9be304906a689bd19a5f0e21a, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/19cfbb8f85a94ce9a672d835565e3c1e, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/65fbd20a600a46bf9630f028b5854a32, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/de4e171948be47e998f8978847ba6c7c, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/a920d16ee211413591f484e047faa85a, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/3abb57d1ead940f0b0d9174f59288046, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/726b220e0a0041b38412dd5b685f9b81, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/4e945f1790844315b63bf298fc1cd2c5, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/a2f7f62cf1c547af928b7215a4bd4041, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/40f0d16f4db341ae829da8a5503ac2b9, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/02ea735fffa34f6c86f8284d2d925d4c] to archive
2023-06-08 18:59:42,100 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.-1] backup.HFileArchiver(360): Archiving compacted files.
2023-06-08 18:59:42,101 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/76ee1f39b051458c9be973f5d17f864e.7bb70d321c62b16faf748011879faa7a to hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/archive/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/76ee1f39b051458c9be973f5d17f864e.7bb70d321c62b16faf748011879faa7a
2023-06-08 18:59:42,103 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/6fcc58b3268d4034854f790fc52e2d32 to hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/archive/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/6fcc58b3268d4034854f790fc52e2d32
2023-06-08 18:59:42,104 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/TestLogRolling-testLogRolling=7bb70d321c62b16faf748011879faa7a-6d096f73787d47c18f45a9705173f797 to hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/archive/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/TestLogRolling-testLogRolling=7bb70d321c62b16faf748011879faa7a-6d096f73787d47c18f45a9705173f797
2023-06-08 18:59:42,105 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/2dff9f40227242cd87d1b925f45b998f to hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/archive/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/2dff9f40227242cd87d1b925f45b998f
2023-06-08 18:59:42,106 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/b30b8becffac4047bb8a22499004e981 to hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/archive/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/b30b8becffac4047bb8a22499004e981
2023-06-08 18:59:42,107 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/d7a72fb1b5db48cb9e9064ad5a835928 to hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/archive/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/d7a72fb1b5db48cb9e9064ad5a835928
2023-06-08 18:59:42,109 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/8af9347ddfda4a9bb950e3b94205456b to hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/archive/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/8af9347ddfda4a9bb950e3b94205456b
2023-06-08 18:59:42,110 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/5ad19e1904a04bd9964cff8f9bb389ac to hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/archive/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/5ad19e1904a04bd9964cff8f9bb389ac
2023-06-08 18:59:42,111 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/a668ceb53be54751b6e8b058154bb78e to hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/archive/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/a668ceb53be54751b6e8b058154bb78e
2023-06-08 18:59:42,112 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/6df644da552c47b38cfd97703e665b4d to hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/archive/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/6df644da552c47b38cfd97703e665b4d
2023-06-08 18:59:42,113 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/69a8b5f5ae294d739cd23391cd24bc37 to hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/archive/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/69a8b5f5ae294d739cd23391cd24bc37
2023-06-08 18:59:42,114 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/55fe4072439741c8803b51ea68ea7b63 to hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/archive/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/55fe4072439741c8803b51ea68ea7b63
2023-06-08 18:59:42,115 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/d09965e9be304906a689bd19a5f0e21a to hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/archive/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/d09965e9be304906a689bd19a5f0e21a
2023-06-08 18:59:42,116 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/19cfbb8f85a94ce9a672d835565e3c1e to hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/archive/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/19cfbb8f85a94ce9a672d835565e3c1e
2023-06-08 18:59:42,118 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/65fbd20a600a46bf9630f028b5854a32 to hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/archive/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/65fbd20a600a46bf9630f028b5854a32
2023-06-08 18:59:42,118 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/de4e171948be47e998f8978847ba6c7c to hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/archive/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/de4e171948be47e998f8978847ba6c7c
2023-06-08 18:59:42,119 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/a920d16ee211413591f484e047faa85a to hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/archive/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/a920d16ee211413591f484e047faa85a
2023-06-08 18:59:42,121 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/3abb57d1ead940f0b0d9174f59288046 to hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/archive/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/3abb57d1ead940f0b0d9174f59288046
2023-06-08 18:59:42,122 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/726b220e0a0041b38412dd5b685f9b81 to hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/archive/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/726b220e0a0041b38412dd5b685f9b81
2023-06-08 18:59:42,123 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/4e945f1790844315b63bf298fc1cd2c5 to hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/archive/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/4e945f1790844315b63bf298fc1cd2c5
2023-06-08 18:59:42,124 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/a2f7f62cf1c547af928b7215a4bd4041 to hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/archive/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/a2f7f62cf1c547af928b7215a4bd4041
2023-06-08 18:59:42,126 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/40f0d16f4db341ae829da8a5503ac2b9 to hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/archive/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/40f0d16f4db341ae829da8a5503ac2b9
2023-06-08 18:59:42,128 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/02ea735fffa34f6c86f8284d2d925d4c to hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/archive/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/info/02ea735fffa34f6c86f8284d2d925d4c
2023-06-08 18:59:42,133 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/data/default/TestLogRolling-testLogRolling/a8913070c8a542b8076a02b10e0081b5/recovered.edits/339.seqid,
newMaxSeqId=339, maxSeqId=88 2023-06-08 18:59:42,134 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5. 2023-06-08 18:59:42,134 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for a8913070c8a542b8076a02b10e0081b5: 2023-06-08 18:59:42,134 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,row0062,1686250727370.a8913070c8a542b8076a02b10e0081b5. 2023-06-08 18:59:42,263 INFO [RS:0;jenkins-hbase17:43115] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,43115,1686250704061; all regions closed. 2023-06-08 18:59:42,263 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/WALs/jenkins-hbase17.apache.org,43115,1686250704061 2023-06-08 18:59:42,271 DEBUG [RS:0;jenkins-hbase17:43115] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/oldWALs 2023-06-08 18:59:42,271 INFO [RS:0;jenkins-hbase17:43115] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase17.apache.org%2C43115%2C1686250704061.meta:.meta(num 1686250704554) 2023-06-08 18:59:42,271 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/WALs/jenkins-hbase17.apache.org,43115,1686250704061 2023-06-08 18:59:42,281 DEBUG [RS:0;jenkins-hbase17:43115] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/oldWALs 2023-06-08 18:59:42,281 INFO [RS:0;jenkins-hbase17:43115] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase17.apache.org%2C43115%2C1686250704061:(num 1686250781932) 2023-06-08 18:59:42,281 DEBUG [RS:0;jenkins-hbase17:43115] ipc.AbstractRpcClient(494): Stopping rpc client 
2023-06-08 18:59:42,281 INFO [RS:0;jenkins-hbase17:43115] regionserver.LeaseManager(133): Closed leases 2023-06-08 18:59:42,281 INFO [RS:0;jenkins-hbase17:43115] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-06-08 18:59:42,281 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-08 18:59:42,282 INFO [RS:0;jenkins-hbase17:43115] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:43115 2023-06-08 18:59:42,284 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): regionserver:43115-0x100abcd479d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,43115,1686250704061 2023-06-08 18:59:42,284 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 18:59:42,284 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): regionserver:43115-0x100abcd479d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 18:59:42,285 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,43115,1686250704061] 2023-06-08 18:59:42,285 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,43115,1686250704061; numProcessing=1 2023-06-08 18:59:42,286 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node 
/hbase/draining/jenkins-hbase17.apache.org,43115,1686250704061 already deleted, retry=false 2023-06-08 18:59:42,286 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,43115,1686250704061 expired; onlineServers=0 2023-06-08 18:59:42,286 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase17.apache.org,36347,1686250704023' ***** 2023-06-08 18:59:42,286 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-06-08 18:59:42,287 DEBUG [M:0;jenkins-hbase17:36347] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@59660d6f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-06-08 18:59:42,287 INFO [M:0;jenkins-hbase17:36347] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,36347,1686250704023 2023-06-08 18:59:42,287 INFO [M:0;jenkins-hbase17:36347] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,36347,1686250704023; all regions closed. 2023-06-08 18:59:42,287 DEBUG [M:0;jenkins-hbase17:36347] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 18:59:42,287 DEBUG [M:0;jenkins-hbase17:36347] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-06-08 18:59:42,287 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-06-08 18:59:42,287 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1686250704197] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1686250704197,5,FailOnTimeoutGroup] 2023-06-08 18:59:42,287 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1686250704197] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1686250704197,5,FailOnTimeoutGroup] 2023-06-08 18:59:42,287 DEBUG [M:0;jenkins-hbase17:36347] cleaner.HFileCleaner(317): Stopping file delete threads 2023-06-08 18:59:42,288 INFO [M:0;jenkins-hbase17:36347] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-06-08 18:59:42,288 INFO [M:0;jenkins-hbase17:36347] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-06-08 18:59:42,289 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-06-08 18:59:42,289 INFO [M:0;jenkins-hbase17:36347] hbase.ChoreService(369): Chore service for: master/jenkins-hbase17:0 had [] on shutdown 2023-06-08 18:59:42,289 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:59:42,289 DEBUG [M:0;jenkins-hbase17:36347] master.HMaster(1512): Stopping service threads 2023-06-08 18:59:42,289 INFO [M:0;jenkins-hbase17:36347] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-06-08 18:59:42,289 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Set 
watcher on znode that does not yet exist, /hbase/master 2023-06-08 18:59:42,289 ERROR [M:0;jenkins-hbase17:36347] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-06-08 18:59:42,289 INFO [M:0;jenkins-hbase17:36347] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-06-08 18:59:42,290 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-06-08 18:59:42,290 DEBUG [M:0;jenkins-hbase17:36347] zookeeper.ZKUtil(398): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-06-08 18:59:42,290 WARN [M:0;jenkins-hbase17:36347] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-06-08 18:59:42,290 INFO [M:0;jenkins-hbase17:36347] assignment.AssignmentManager(315): Stopping assignment manager 2023-06-08 18:59:42,290 INFO [M:0;jenkins-hbase17:36347] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-06-08 18:59:42,290 DEBUG [M:0;jenkins-hbase17:36347] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-08 18:59:42,291 INFO [M:0;jenkins-hbase17:36347] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 18:59:42,291 DEBUG [M:0;jenkins-hbase17:36347] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-08 18:59:42,291 DEBUG [M:0;jenkins-hbase17:36347] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-08 18:59:42,291 DEBUG [M:0;jenkins-hbase17:36347] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 18:59:42,291 INFO [M:0;jenkins-hbase17:36347] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=64.78 KB heapSize=78.52 KB 2023-06-08 18:59:42,315 INFO [M:0;jenkins-hbase17:36347] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=64.78 KB at sequenceid=160 (bloomFilter=true), to=hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/5e59e616baf346208f89571614054a5b 2023-06-08 18:59:42,322 INFO [M:0;jenkins-hbase17:36347] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5e59e616baf346208f89571614054a5b 2023-06-08 18:59:42,324 DEBUG [M:0;jenkins-hbase17:36347] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/5e59e616baf346208f89571614054a5b as hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/5e59e616baf346208f89571614054a5b 2023-06-08 18:59:42,330 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-08 18:59:42,338 INFO [M:0;jenkins-hbase17:36347] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5e59e616baf346208f89571614054a5b 2023-06-08 18:59:42,338 INFO [M:0;jenkins-hbase17:36347] regionserver.HStore(1080): Added 
hdfs://localhost.localdomain:36619/user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/5e59e616baf346208f89571614054a5b, entries=18, sequenceid=160, filesize=6.9 K 2023-06-08 18:59:42,340 INFO [M:0;jenkins-hbase17:36347] regionserver.HRegion(2948): Finished flush of dataSize ~64.78 KB/66332, heapSize ~78.51 KB/80392, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 49ms, sequenceid=160, compaction requested=false 2023-06-08 18:59:42,343 INFO [M:0;jenkins-hbase17:36347] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 18:59:42,343 DEBUG [M:0;jenkins-hbase17:36347] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-08 18:59:42,344 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/38a0533e-e101-404b-7d05-4735afaf01fb/MasterData/WALs/jenkins-hbase17.apache.org,36347,1686250704023 2023-06-08 18:59:42,347 INFO [M:0;jenkins-hbase17:36347] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-06-08 18:59:42,347 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-06-08 18:59:42,348 INFO [M:0;jenkins-hbase17:36347] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:36347 2023-06-08 18:59:42,349 DEBUG [M:0;jenkins-hbase17:36347] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase17.apache.org,36347,1686250704023 already deleted, retry=false 2023-06-08 18:59:42,385 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): regionserver:43115-0x100abcd479d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 18:59:42,386 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): regionserver:43115-0x100abcd479d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 18:59:42,385 INFO [RS:0;jenkins-hbase17:43115] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,43115,1686250704061; zookeeper connection closed. 2023-06-08 18:59:42,386 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@26637378] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@26637378 2023-06-08 18:59:42,386 INFO [Listener at localhost.localdomain/41149] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-06-08 18:59:42,486 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 18:59:42,486 INFO [M:0;jenkins-hbase17:36347] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,36347,1686250704023; zookeeper connection closed. 
2023-06-08 18:59:42,486 DEBUG [Listener at localhost.localdomain/41149-EventThread] zookeeper.ZKWatcher(600): master:36347-0x100abcd479d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 18:59:42,488 WARN [Listener at localhost.localdomain/41149] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 18:59:42,497 INFO [Listener at localhost.localdomain/41149] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 18:59:42,602 WARN [BP-1613484395-136.243.18.41-1686250703480 heartbeating to localhost.localdomain/127.0.0.1:36619] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-08 18:59:42,602 WARN [BP-1613484395-136.243.18.41-1686250703480 heartbeating to localhost.localdomain/127.0.0.1:36619] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1613484395-136.243.18.41-1686250703480 (Datanode Uuid f5a0fb88-cdbe-4e65-be49-721f86041691) service to localhost.localdomain/127.0.0.1:36619 2023-06-08 18:59:42,603 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa/cluster_0fd21936-5621-dab6-bd95-bdd5a304d1ad/dfs/data/data3/current/BP-1613484395-136.243.18.41-1686250703480] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 18:59:42,604 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa/cluster_0fd21936-5621-dab6-bd95-bdd5a304d1ad/dfs/data/data4/current/BP-1613484395-136.243.18.41-1686250703480] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 18:59:42,605 WARN [Listener at localhost.localdomain/41149] 
datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 18:59:42,608 INFO [Listener at localhost.localdomain/41149] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 18:59:42,712 WARN [BP-1613484395-136.243.18.41-1686250703480 heartbeating to localhost.localdomain/127.0.0.1:36619] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-08 18:59:42,712 WARN [BP-1613484395-136.243.18.41-1686250703480 heartbeating to localhost.localdomain/127.0.0.1:36619] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1613484395-136.243.18.41-1686250703480 (Datanode Uuid ded236d7-ecf3-416e-8e7f-2ab01d1a3c97) service to localhost.localdomain/127.0.0.1:36619 2023-06-08 18:59:42,712 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa/cluster_0fd21936-5621-dab6-bd95-bdd5a304d1ad/dfs/data/data1/current/BP-1613484395-136.243.18.41-1686250703480] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 18:59:42,713 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa/cluster_0fd21936-5621-dab6-bd95-bdd5a304d1ad/dfs/data/data2/current/BP-1613484395-136.243.18.41-1686250703480] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 18:59:42,730 INFO [Listener at localhost.localdomain/41149] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-06-08 18:59:42,859 INFO [Listener at localhost.localdomain/41149] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-06-08 18:59:42,890 INFO [Listener at localhost.localdomain/41149] 
hbase.HBaseTestingUtility(1293): Minicluster is down 2023-06-08 18:59:42,902 INFO [Listener at localhost.localdomain/41149] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRolling Thread=107 (was 97) - Thread LEAK? -, OpenFileDescriptor=532 (was 495) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=229 (was 281), ProcessCount=184 (was 184), AvailableMemoryMB=875 (was 1180) 2023-06-08 18:59:42,912 INFO [Listener at localhost.localdomain/41149] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=107, OpenFileDescriptor=532, MaxFileDescriptor=60000, SystemLoadAverage=229, ProcessCount=184, AvailableMemoryMB=875 2023-06-08 18:59:42,912 INFO [Listener at localhost.localdomain/41149] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-06-08 18:59:42,912 INFO [Listener at localhost.localdomain/41149] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa/hadoop.log.dir so I do NOT create it in target/test-data/06ab4017-d22f-6856-a463-c47a5c2a4e1d 2023-06-08 18:59:42,912 INFO [Listener at localhost.localdomain/41149] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7195edd6-8cbd-9bab-1150-f5d068b033aa/hadoop.tmp.dir so I do NOT create it in target/test-data/06ab4017-d22f-6856-a463-c47a5c2a4e1d 2023-06-08 18:59:42,913 INFO [Listener at localhost.localdomain/41149] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/06ab4017-d22f-6856-a463-c47a5c2a4e1d/cluster_49106180-8597-a212-366f-c8b12fe5c3bd, deleteOnExit=true 2023-06-08 18:59:42,913 INFO [Listener at localhost.localdomain/41149] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-06-08 18:59:42,913 INFO [Listener at localhost.localdomain/41149] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/06ab4017-d22f-6856-a463-c47a5c2a4e1d/test.cache.data in system properties and HBase conf 2023-06-08 18:59:42,913 INFO [Listener at localhost.localdomain/41149] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/06ab4017-d22f-6856-a463-c47a5c2a4e1d/hadoop.tmp.dir in system properties and HBase conf 2023-06-08 18:59:42,913 INFO [Listener at localhost.localdomain/41149] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/06ab4017-d22f-6856-a463-c47a5c2a4e1d/hadoop.log.dir in system properties and HBase conf 2023-06-08 18:59:42,913 INFO [Listener at localhost.localdomain/41149] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/06ab4017-d22f-6856-a463-c47a5c2a4e1d/mapreduce.cluster.local.dir in system properties and HBase conf 2023-06-08 18:59:42,913 INFO [Listener at localhost.localdomain/41149] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/06ab4017-d22f-6856-a463-c47a5c2a4e1d/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-06-08 18:59:42,913 INFO [Listener at 
localhost.localdomain/41149] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-06-08 18:59:42,913 DEBUG [Listener at localhost.localdomain/41149] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-06-08 18:59:42,914 INFO [Listener at localhost.localdomain/41149] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/06ab4017-d22f-6856-a463-c47a5c2a4e1d/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-06-08 18:59:42,914 INFO [Listener at localhost.localdomain/41149] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/06ab4017-d22f-6856-a463-c47a5c2a4e1d/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-06-08 18:59:42,914 INFO [Listener at localhost.localdomain/41149] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/06ab4017-d22f-6856-a463-c47a5c2a4e1d/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-06-08 18:59:42,914 INFO [Listener at localhost.localdomain/41149] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/06ab4017-d22f-6856-a463-c47a5c2a4e1d/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-08 18:59:42,914 INFO [Listener at localhost.localdomain/41149] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/06ab4017-d22f-6856-a463-c47a5c2a4e1d/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-06-08 18:59:42,914 INFO [Listener at localhost.localdomain/41149] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/06ab4017-d22f-6856-a463-c47a5c2a4e1d/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-08 18:59:42,914 INFO [Listener at localhost.localdomain/41149] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/06ab4017-d22f-6856-a463-c47a5c2a4e1d/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-08 18:59:42,914 INFO [Listener at localhost.localdomain/41149] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/06ab4017-d22f-6856-a463-c47a5c2a4e1d/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-08 18:59:42,914 INFO [Listener at localhost.localdomain/41149] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/06ab4017-d22f-6856-a463-c47a5c2a4e1d/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-08 18:59:42,915 INFO [Listener at localhost.localdomain/41149] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/06ab4017-d22f-6856-a463-c47a5c2a4e1d/nfs.dump.dir in system properties and HBase conf 2023-06-08 
18:59:42,915 INFO [Listener at localhost.localdomain/41149] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/06ab4017-d22f-6856-a463-c47a5c2a4e1d/java.io.tmpdir in system properties and HBase conf 2023-06-08 18:59:42,915 INFO [Listener at localhost.localdomain/41149] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/06ab4017-d22f-6856-a463-c47a5c2a4e1d/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-08 18:59:42,915 INFO [Listener at localhost.localdomain/41149] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/06ab4017-d22f-6856-a463-c47a5c2a4e1d/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-08 18:59:42,915 INFO [Listener at localhost.localdomain/41149] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/06ab4017-d22f-6856-a463-c47a5c2a4e1d/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-08 18:59:42,917 WARN [Listener at localhost.localdomain/41149] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-06-08 18:59:42,918 WARN [Listener at localhost.localdomain/41149] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-08 18:59:42,918 WARN [Listener at localhost.localdomain/41149] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-08 18:59:42,947 WARN [Listener at localhost.localdomain/41149] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 18:59:42,949 INFO [Listener at localhost.localdomain/41149] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 18:59:42,953 INFO [Listener at localhost.localdomain/41149] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/06ab4017-d22f-6856-a463-c47a5c2a4e1d/java.io.tmpdir/Jetty_localhost_localdomain_44631_hdfs____.l705ua/webapp 2023-06-08 18:59:43,025 INFO [Listener at localhost.localdomain/41149] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:44631 2023-06-08 18:59:43,026 WARN [Listener at localhost.localdomain/41149] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-06-08 18:59:43,027 WARN [Listener at localhost.localdomain/41149] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-06-08 18:59:43,027 WARN [Listener at localhost.localdomain/41149] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-06-08 18:59:43,053 WARN [Listener at localhost.localdomain/38497] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-08 18:59:43,066 WARN [Listener at localhost.localdomain/38497] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-06-08 18:59:43,068 WARN [Listener at localhost.localdomain/38497] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-08 18:59:43,069 INFO [Listener at localhost.localdomain/38497] log.Slf4jLog(67): jetty-6.1.26
2023-06-08 18:59:43,072 INFO [Listener at localhost.localdomain/38497] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/06ab4017-d22f-6856-a463-c47a5c2a4e1d/java.io.tmpdir/Jetty_localhost_41293_datanode____.z8u8ve/webapp
2023-06-08 18:59:43,151 INFO [Listener at localhost.localdomain/38497] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41293
2023-06-08 18:59:43,168 WARN [Listener at localhost.localdomain/37309] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-08 18:59:43,180 WARN [Listener at localhost.localdomain/37309] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-06-08 18:59:43,183 WARN [Listener at localhost.localdomain/37309] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-06-08 18:59:43,184 INFO [Listener at localhost.localdomain/37309] log.Slf4jLog(67): jetty-6.1.26
2023-06-08 18:59:43,189 INFO [Listener at localhost.localdomain/37309] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/06ab4017-d22f-6856-a463-c47a5c2a4e1d/java.io.tmpdir/Jetty_localhost_36901_datanode____hknkf2/webapp
2023-06-08 18:59:43,236 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8a4660e270bdd2a7: Processing first storage report for DS-4e421233-0e58-462d-acba-12b1c1ba5428 from datanode 2e1cdf66-cc5e-4b6f-932e-87c8fd33c218
2023-06-08 18:59:43,236 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8a4660e270bdd2a7: from storage DS-4e421233-0e58-462d-acba-12b1c1ba5428 node DatanodeRegistration(127.0.0.1:40997, datanodeUuid=2e1cdf66-cc5e-4b6f-932e-87c8fd33c218, infoPort=37053, infoSecurePort=0, ipcPort=37309, storageInfo=lv=-57;cid=testClusterID;nsid=299096280;c=1686250782919), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-08 18:59:43,236 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8a4660e270bdd2a7: Processing first storage report for DS-33ed9022-c029-474f-a3f1-22318a9fa7a7 from datanode 2e1cdf66-cc5e-4b6f-932e-87c8fd33c218
2023-06-08 18:59:43,236 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8a4660e270bdd2a7: from storage DS-33ed9022-c029-474f-a3f1-22318a9fa7a7 node DatanodeRegistration(127.0.0.1:40997, datanodeUuid=2e1cdf66-cc5e-4b6f-932e-87c8fd33c218, infoPort=37053, infoSecurePort=0, ipcPort=37309, storageInfo=lv=-57;cid=testClusterID;nsid=299096280;c=1686250782919), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-08 18:59:43,267 INFO [Listener at localhost.localdomain/37309] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36901
2023-06-08 18:59:43,272 WARN [Listener at localhost.localdomain/40939] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-06-08 18:59:43,327 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3e8b4224a4da0fc9: Processing first storage report for DS-b851dc84-617e-42be-88f7-92c5ad966944 from datanode 25ade0d7-7a87-49d1-9b3b-fbfeeb1a0865
2023-06-08 18:59:43,327 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3e8b4224a4da0fc9: from storage DS-b851dc84-617e-42be-88f7-92c5ad966944 node DatanodeRegistration(127.0.0.1:34283, datanodeUuid=25ade0d7-7a87-49d1-9b3b-fbfeeb1a0865, infoPort=39605, infoSecurePort=0, ipcPort=40939, storageInfo=lv=-57;cid=testClusterID;nsid=299096280;c=1686250782919), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-08 18:59:43,327 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3e8b4224a4da0fc9: Processing first storage report for DS-409238ff-c9c6-40ff-887c-c02c2740882e from datanode 25ade0d7-7a87-49d1-9b3b-fbfeeb1a0865
2023-06-08 18:59:43,327 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3e8b4224a4da0fc9: from storage DS-409238ff-c9c6-40ff-887c-c02c2740882e node DatanodeRegistration(127.0.0.1:34283, datanodeUuid=25ade0d7-7a87-49d1-9b3b-fbfeeb1a0865, infoPort=39605, infoSecurePort=0, ipcPort=40939, storageInfo=lv=-57;cid=testClusterID;nsid=299096280;c=1686250782919), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-06-08 18:59:43,384 DEBUG [Listener at localhost.localdomain/40939] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/06ab4017-d22f-6856-a463-c47a5c2a4e1d
2023-06-08 18:59:43,387 INFO [Listener at localhost.localdomain/40939] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/06ab4017-d22f-6856-a463-c47a5c2a4e1d/cluster_49106180-8597-a212-366f-c8b12fe5c3bd/zookeeper_0, clientPort=61260, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/06ab4017-d22f-6856-a463-c47a5c2a4e1d/cluster_49106180-8597-a212-366f-c8b12fe5c3bd/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/06ab4017-d22f-6856-a463-c47a5c2a4e1d/cluster_49106180-8597-a212-366f-c8b12fe5c3bd/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-06-08 18:59:43,389 INFO [Listener at localhost.localdomain/40939] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=61260
2023-06-08 18:59:43,390 INFO [Listener at localhost.localdomain/40939] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-08 18:59:43,391 INFO [Listener at localhost.localdomain/40939] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-08 18:59:43,404 INFO [Listener at localhost.localdomain/40939] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4 with version=8
2023-06-08 18:59:43,405 INFO [Listener at localhost.localdomain/40939] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:44823/user/jenkins/test-data/db7b3f1f-4575-0fb7-f3f7-9acbca061184/hbase-staging
2023-06-08 18:59:43,406 INFO [Listener at localhost.localdomain/40939] client.ConnectionUtils(127): master/jenkins-hbase17:0 server-side Connection retries=45
2023-06-08 18:59:43,407 INFO [Listener at localhost.localdomain/40939] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-08 18:59:43,407 INFO [Listener at localhost.localdomain/40939] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-06-08 18:59:43,407 INFO [Listener at localhost.localdomain/40939] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-06-08 18:59:43,407 INFO [Listener at localhost.localdomain/40939] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-08 18:59:43,407 INFO [Listener at localhost.localdomain/40939] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-06-08 18:59:43,407 INFO [Listener at localhost.localdomain/40939] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-06-08 18:59:43,409 INFO [Listener at localhost.localdomain/40939] ipc.NettyRpcServer(120): Bind to /136.243.18.41:43573
2023-06-08 18:59:43,409 INFO [Listener at localhost.localdomain/40939] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-08 18:59:43,410 INFO [Listener at localhost.localdomain/40939] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-08 18:59:43,411 INFO [Listener at localhost.localdomain/40939] zookeeper.RecoverableZooKeeper(93): Process identifier=master:43573 connecting to ZooKeeper ensemble=127.0.0.1:61260
2023-06-08 18:59:43,416 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): master:435730x0, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-06-08 18:59:43,419 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:43573-0x100abce7db90000 connected
2023-06-08 18:59:43,426 DEBUG [Listener at localhost.localdomain/40939] zookeeper.ZKUtil(164): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-06-08 18:59:43,427 DEBUG [Listener at localhost.localdomain/40939] zookeeper.ZKUtil(164): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-08 18:59:43,427 DEBUG [Listener at localhost.localdomain/40939] zookeeper.ZKUtil(164): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-06-08 18:59:43,427 DEBUG [Listener at localhost.localdomain/40939] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43573
2023-06-08 18:59:43,428 DEBUG [Listener at localhost.localdomain/40939] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43573
2023-06-08 18:59:43,428 DEBUG [Listener at localhost.localdomain/40939] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43573
2023-06-08 18:59:43,428 DEBUG [Listener at localhost.localdomain/40939] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43573
2023-06-08 18:59:43,428 DEBUG [Listener at localhost.localdomain/40939] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43573
2023-06-08 18:59:43,428 INFO [Listener at localhost.localdomain/40939] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4, hbase.cluster.distributed=false
2023-06-08 18:59:43,443 INFO [Listener at localhost.localdomain/40939] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45
2023-06-08 18:59:43,443 INFO [Listener at localhost.localdomain/40939] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-08 18:59:43,443 INFO [Listener at localhost.localdomain/40939] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-06-08 18:59:43,443 INFO [Listener at localhost.localdomain/40939] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-06-08 18:59:43,443 INFO [Listener at localhost.localdomain/40939] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-06-08 18:59:43,443 INFO [Listener at localhost.localdomain/40939] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-06-08 18:59:43,443 INFO [Listener at localhost.localdomain/40939] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-06-08 18:59:43,445 INFO [Listener at localhost.localdomain/40939] ipc.NettyRpcServer(120): Bind to /136.243.18.41:46357
2023-06-08 18:59:43,445 INFO [Listener at localhost.localdomain/40939] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-06-08 18:59:43,449 DEBUG [Listener at localhost.localdomain/40939] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-06-08 18:59:43,449 INFO [Listener at localhost.localdomain/40939] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-08 18:59:43,450 INFO [Listener at localhost.localdomain/40939] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-08 18:59:43,451 INFO [Listener at localhost.localdomain/40939] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46357 connecting to ZooKeeper ensemble=127.0.0.1:61260
2023-06-08 18:59:43,454 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): regionserver:463570x0, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-06-08 18:59:43,455 DEBUG [Listener at localhost.localdomain/40939] zookeeper.ZKUtil(164): regionserver:463570x0, quorum=127.0.0.1:61260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-06-08 18:59:43,455 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46357-0x100abce7db90001 connected
2023-06-08 18:59:43,456 DEBUG [Listener at localhost.localdomain/40939] zookeeper.ZKUtil(164): regionserver:46357-0x100abce7db90001, quorum=127.0.0.1:61260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-08 18:59:43,456 DEBUG [Listener at localhost.localdomain/40939] zookeeper.ZKUtil(164): regionserver:46357-0x100abce7db90001, quorum=127.0.0.1:61260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-06-08 18:59:43,457 DEBUG [Listener at localhost.localdomain/40939] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46357
2023-06-08 18:59:43,458 DEBUG [Listener at localhost.localdomain/40939] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46357
2023-06-08 18:59:43,458 DEBUG [Listener at localhost.localdomain/40939] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46357
2023-06-08 18:59:43,458 DEBUG [Listener at localhost.localdomain/40939] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46357
2023-06-08 18:59:43,458 DEBUG [Listener at localhost.localdomain/40939] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46357
2023-06-08 18:59:43,459 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase17.apache.org,43573,1686250783406
2023-06-08 18:59:43,461 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-06-08 18:59:43,461 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase17.apache.org,43573,1686250783406
2023-06-08 18:59:43,462 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-06-08 18:59:43,462 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): regionserver:46357-0x100abce7db90001, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-06-08 18:59:43,462 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-08 18:59:43,462 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-06-08 18:59:43,464 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-06-08 18:59:43,464 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase17.apache.org,43573,1686250783406 from backup master directory
2023-06-08 18:59:43,465 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase17.apache.org,43573,1686250783406
2023-06-08 18:59:43,465 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-06-08 18:59:43,465 WARN [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-06-08 18:59:43,465 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase17.apache.org,43573,1686250783406
2023-06-08 18:59:43,474 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/hbase.id with ID: 0cc3b5ad-0528-4538-824f-b7385c2dd18a
2023-06-08 18:59:43,485 INFO [master/jenkins-hbase17:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-08 18:59:43,487 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-08 18:59:43,497 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x6c34a6f8 to 127.0.0.1:61260 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-06-08 18:59:43,500 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7c37d0f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-06-08 18:59:43,500 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-06-08 18:59:43,501 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000
2023-06-08 18:59:43,501 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-06-08 18:59:43,503 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/MasterData/data/master/store-tmp
2023-06-08 18:59:43,509 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-08 18:59:43,509 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-06-08 18:59:43,509 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-08 18:59:43,509 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-08 18:59:43,509 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-06-08 18:59:43,509 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-08 18:59:43,509 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-08 18:59:43,509 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-06-08 18:59:43,509 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/MasterData/WALs/jenkins-hbase17.apache.org,43573,1686250783406
2023-06-08 18:59:43,511 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C43573%2C1686250783406, suffix=, logDir=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/MasterData/WALs/jenkins-hbase17.apache.org,43573,1686250783406, archiveDir=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/MasterData/oldWALs, maxLogs=10
2023-06-08 18:59:43,517 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/MasterData/WALs/jenkins-hbase17.apache.org,43573,1686250783406/jenkins-hbase17.apache.org%2C43573%2C1686250783406.1686250783512
2023-06-08 18:59:43,517 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34283,DS-b851dc84-617e-42be-88f7-92c5ad966944,DISK], DatanodeInfoWithStorage[127.0.0.1:40997,DS-4e421233-0e58-462d-acba-12b1c1ba5428,DISK]]
2023-06-08 18:59:43,517 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}
2023-06-08 18:59:43,517 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-06-08 18:59:43,517 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682
2023-06-08 18:59:43,517 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682
2023-06-08 18:59:43,519 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682
2023-06-08 18:59:43,520 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc
2023-06-08 18:59:43,520 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc
2023-06-08 18:59:43,521 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-08 18:59:43,521 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682
2023-06-08 18:59:43,522 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682
2023-06-08 18:59:43,524 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682
2023-06-08 18:59:43,525 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-06-08 18:59:43,525 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=769784, jitterRate=-0.021169528365135193}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-06-08 18:59:43,525 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682:
2023-06-08 18:59:43,526 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4
2023-06-08 18:59:43,527 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5
2023-06-08 18:59:43,527 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50
2023-06-08 18:59:43,527 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery...
2023-06-08 18:59:43,527 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec
2023-06-08 18:59:43,527 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec
2023-06-08 18:59:43,527 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150
2023-06-08 18:59:43,528 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: []
2023-06-08 18:59:43,528 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'.
2023-06-08 18:59:43,538 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-08 18:59:43,538 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-06-08 18:59:43,539 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-08 18:59:43,539 INFO [master/jenkins-hbase17:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-08 18:59:43,539 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-08 18:59:43,540 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:59:43,541 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-08 18:59:43,541 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): 
master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-08 18:59:43,542 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-08 18:59:43,542 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-08 18:59:43,542 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): regionserver:46357-0x100abce7db90001, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-08 18:59:43,542 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:59:43,543 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase17.apache.org,43573,1686250783406, sessionid=0x100abce7db90000, setting cluster-up flag (Was=false) 2023-06-08 18:59:43,546 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:59:43,548 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-08 18:59:43,549 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] 
procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,43573,1686250783406 2023-06-08 18:59:43,550 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:59:43,553 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-08 18:59:43,554 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,43573,1686250783406 2023-06-08 18:59:43,555 WARN [master/jenkins-hbase17:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/.hbase-snapshot/.tmp 2023-06-08 18:59:43,558 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-08 18:59:43,559 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-06-08 18:59:43,559 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-06-08 18:59:43,559 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-06-08 18:59:43,559 DEBUG 
[master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-06-08 18:59:43,560 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase17:0, corePoolSize=10, maxPoolSize=10 2023-06-08 18:59:43,560 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:59:43,560 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-06-08 18:59:43,560 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:59:43,560 INFO [RS:0;jenkins-hbase17:46357] regionserver.HRegionServer(951): ClusterId : 0cc3b5ad-0528-4538-824f-b7385c2dd18a 2023-06-08 18:59:43,561 DEBUG [RS:0;jenkins-hbase17:46357] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-08 18:59:43,562 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1686250813561 2023-06-08 18:59:43,562 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-08 18:59:43,563 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-08 18:59:43,563 INFO [master/jenkins-hbase17:0:becomeActiveMaster] 
cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-08 18:59:43,563 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-08 18:59:43,563 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-08 18:59:43,563 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-08 18:59:43,563 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-08 18:59:43,563 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-08 18:59:43,564 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-08 18:59:43,564 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-08 18:59:43,564 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-08 18:59:43,564 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-08 18:59:43,565 DEBUG [RS:0;jenkins-hbase17:46357] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-08 18:59:43,565 DEBUG [RS:0;jenkins-hbase17:46357] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-08 18:59:43,565 INFO [master/jenkins-hbase17:0:becomeActiveMaster] 
cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-08 18:59:43,565 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-08 18:59:43,565 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1686250783565,5,FailOnTimeoutGroup] 2023-06-08 18:59:43,566 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1686250783566,5,FailOnTimeoutGroup] 2023-06-08 18:59:43,566 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-08 18:59:43,566 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-08 18:59:43,566 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-08 18:59:43,566 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-06-08 18:59:43,566 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-08 18:59:43,567 DEBUG [RS:0;jenkins-hbase17:46357] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-08 18:59:43,568 DEBUG [RS:0;jenkins-hbase17:46357] zookeeper.ReadOnlyZKClient(139): Connect 0x79729eae to 127.0.0.1:61260 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-08 18:59:43,573 DEBUG [RS:0;jenkins-hbase17:46357] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@77b57ff0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-08 18:59:43,573 DEBUG [RS:0;jenkins-hbase17:46357] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@738af5de, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, 
connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-06-08 18:59:43,577 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-08 18:59:43,577 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-08 18:59:43,577 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4 2023-06-08 18:59:43,582 DEBUG [RS:0;jenkins-hbase17:46357] 
regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase17:46357 2023-06-08 18:59:43,583 INFO [RS:0;jenkins-hbase17:46357] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-08 18:59:43,583 INFO [RS:0;jenkins-hbase17:46357] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-08 18:59:43,583 DEBUG [RS:0;jenkins-hbase17:46357] regionserver.HRegionServer(1022): About to register with Master. 2023-06-08 18:59:43,583 INFO [RS:0;jenkins-hbase17:46357] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase17.apache.org,43573,1686250783406 with isa=jenkins-hbase17.apache.org/136.243.18.41:46357, startcode=1686250783442 2023-06-08 18:59:43,583 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:59:43,583 DEBUG [RS:0;jenkins-hbase17:46357] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-08 18:59:43,585 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-08 18:59:43,587 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:59677, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-06-08 18:59:43,587 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/meta/1588230740/info 2023-06-08 18:59:43,588 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43573] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,46357,1686250783442 2023-06-08 18:59:43,588 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-08 18:59:43,588 DEBUG [RS:0;jenkins-hbase17:46357] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4 2023-06-08 18:59:43,588 DEBUG [RS:0;jenkins-hbase17:46357] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:38497 2023-06-08 18:59:43,588 DEBUG [RS:0;jenkins-hbase17:46357] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-08 18:59:43,588 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:59:43,589 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 
2023-06-08 18:59:43,590 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 18:59:43,590 DEBUG [RS:0;jenkins-hbase17:46357] zookeeper.ZKUtil(162): regionserver:46357-0x100abce7db90001, quorum=127.0.0.1:61260, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,46357,1686250783442 2023-06-08 18:59:43,590 WARN [RS:0;jenkins-hbase17:46357] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-06-08 18:59:43,591 INFO [RS:0;jenkins-hbase17:46357] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 18:59:43,591 DEBUG [RS:0;jenkins-hbase17:46357] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/WALs/jenkins-hbase17.apache.org,46357,1686250783442 2023-06-08 18:59:43,591 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,46357,1686250783442] 2023-06-08 18:59:43,591 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/meta/1588230740/rep_barrier 2023-06-08 18:59:43,591 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window 
min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-08 18:59:43,593 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:59:43,593 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-08 18:59:43,595 DEBUG [RS:0;jenkins-hbase17:46357] zookeeper.ZKUtil(162): regionserver:46357-0x100abce7db90001, quorum=127.0.0.1:61260, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,46357,1686250783442 2023-06-08 18:59:43,595 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/meta/1588230740/table 2023-06-08 18:59:43,595 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-08 18:59:43,595 DEBUG [RS:0;jenkins-hbase17:46357] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-08 18:59:43,596 INFO [RS:0;jenkins-hbase17:46357] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-08 18:59:43,596 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:59:43,596 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/meta/1588230740 2023-06-08 18:59:43,596 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/meta/1588230740 2023-06-08 18:59:43,597 INFO [RS:0;jenkins-hbase17:46357] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-08 18:59:43,597 INFO [RS:0;jenkins-hbase17:46357] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-08 18:59:43,597 INFO [RS:0;jenkins-hbase17:46357] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-06-08 18:59:43,597 INFO [RS:0;jenkins-hbase17:46357] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-08 18:59:43,598 INFO [RS:0;jenkins-hbase17:46357] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-06-08 18:59:43,599 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-08 18:59:43,599 DEBUG [RS:0;jenkins-hbase17:46357] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:59:43,599 DEBUG [RS:0;jenkins-hbase17:46357] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:59:43,599 DEBUG [RS:0;jenkins-hbase17:46357] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:59:43,599 DEBUG [RS:0;jenkins-hbase17:46357] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:59:43,599 DEBUG [RS:0;jenkins-hbase17:46357] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:59:43,599 DEBUG [RS:0;jenkins-hbase17:46357] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-06-08 18:59:43,599 DEBUG [RS:0;jenkins-hbase17:46357] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:59:43,600 
DEBUG [RS:0;jenkins-hbase17:46357] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:59:43,600 DEBUG [RS:0;jenkins-hbase17:46357] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:59:43,600 DEBUG [RS:0;jenkins-hbase17:46357] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-06-08 18:59:43,600 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-08 18:59:43,601 INFO [RS:0;jenkins-hbase17:46357] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-08 18:59:43,601 INFO [RS:0;jenkins-hbase17:46357] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-08 18:59:43,601 INFO [RS:0;jenkins-hbase17:46357] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-06-08 18:59:43,602 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 18:59:43,603 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=777164, jitterRate=-0.011785581707954407}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-08 18:59:43,603 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-08 18:59:43,603 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-08 18:59:43,603 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-08 18:59:43,603 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-08 18:59:43,603 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-08 18:59:43,603 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-08 18:59:43,604 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-08 18:59:43,604 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-08 18:59:43,605 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-08 18:59:43,605 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-08 18:59:43,605 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-08 18:59:43,609 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-08 18:59:43,611 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-08 18:59:43,615 INFO [RS:0;jenkins-hbase17:46357] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-08 18:59:43,615 INFO [RS:0;jenkins-hbase17:46357] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,46357,1686250783442-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-08 18:59:43,624 INFO [RS:0;jenkins-hbase17:46357] regionserver.Replication(203): jenkins-hbase17.apache.org,46357,1686250783442 started 2023-06-08 18:59:43,624 INFO [RS:0;jenkins-hbase17:46357] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,46357,1686250783442, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:46357, sessionid=0x100abce7db90001 2023-06-08 18:59:43,624 DEBUG [RS:0;jenkins-hbase17:46357] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-08 18:59:43,624 DEBUG [RS:0;jenkins-hbase17:46357] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,46357,1686250783442 2023-06-08 18:59:43,624 DEBUG [RS:0;jenkins-hbase17:46357] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,46357,1686250783442' 2023-06-08 18:59:43,625 DEBUG [RS:0;jenkins-hbase17:46357] procedure.ZKProcedureMemberRpcs(134): Checking for aborted 
procedures on node: '/hbase/flush-table-proc/abort' 2023-06-08 18:59:43,625 DEBUG [RS:0;jenkins-hbase17:46357] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-08 18:59:43,625 DEBUG [RS:0;jenkins-hbase17:46357] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-08 18:59:43,625 DEBUG [RS:0;jenkins-hbase17:46357] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-08 18:59:43,625 DEBUG [RS:0;jenkins-hbase17:46357] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,46357,1686250783442 2023-06-08 18:59:43,625 DEBUG [RS:0;jenkins-hbase17:46357] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,46357,1686250783442' 2023-06-08 18:59:43,625 DEBUG [RS:0;jenkins-hbase17:46357] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-06-08 18:59:43,626 DEBUG [RS:0;jenkins-hbase17:46357] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-08 18:59:43,626 DEBUG [RS:0;jenkins-hbase17:46357] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-08 18:59:43,626 INFO [RS:0;jenkins-hbase17:46357] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-08 18:59:43,626 INFO [RS:0;jenkins-hbase17:46357] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-06-08 18:59:43,728 INFO [RS:0;jenkins-hbase17:46357] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C46357%2C1686250783442, suffix=, logDir=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/WALs/jenkins-hbase17.apache.org,46357,1686250783442, archiveDir=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/oldWALs, maxLogs=32 2023-06-08 18:59:43,737 INFO [RS:0;jenkins-hbase17:46357] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/WALs/jenkins-hbase17.apache.org,46357,1686250783442/jenkins-hbase17.apache.org%2C46357%2C1686250783442.1686250783728 2023-06-08 18:59:43,737 DEBUG [RS:0;jenkins-hbase17:46357] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40997,DS-4e421233-0e58-462d-acba-12b1c1ba5428,DISK], DatanodeInfoWithStorage[127.0.0.1:34283,DS-b851dc84-617e-42be-88f7-92c5ad966944,DISK]] 2023-06-08 18:59:43,761 DEBUG [jenkins-hbase17:43573] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-08 18:59:43,762 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,46357,1686250783442, state=OPENING 2023-06-08 18:59:43,763 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-08 18:59:43,764 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 18:59:43,765 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-08 18:59:43,765 INFO [PEWorker-3] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,46357,1686250783442}] 2023-06-08 18:59:43,920 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,46357,1686250783442 2023-06-08 18:59:43,921 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-08 18:59:43,925 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:57634, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-08 18:59:43,932 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-08 18:59:43,932 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 18:59:43,935 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C46357%2C1686250783442.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/WALs/jenkins-hbase17.apache.org,46357,1686250783442, archiveDir=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/oldWALs, maxLogs=32 2023-06-08 18:59:43,943 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/WALs/jenkins-hbase17.apache.org,46357,1686250783442/jenkins-hbase17.apache.org%2C46357%2C1686250783442.meta.1686250783936.meta 2023-06-08 18:59:43,943 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:34283,DS-b851dc84-617e-42be-88f7-92c5ad966944,DISK], DatanodeInfoWithStorage[127.0.0.1:40997,DS-4e421233-0e58-462d-acba-12b1c1ba5428,DISK]] 2023-06-08 18:59:43,943 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-08 18:59:43,943 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-08 18:59:43,943 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-08 18:59:43,944 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-06-08 18:59:43,944 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-08 18:59:43,944 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:59:43,944 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-08 18:59:43,944 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-08 18:59:43,946 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-08 18:59:43,947 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/meta/1588230740/info 2023-06-08 18:59:43,947 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/meta/1588230740/info 2023-06-08 18:59:43,948 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-08 18:59:43,948 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:59:43,949 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-08 18:59:43,950 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/meta/1588230740/rep_barrier 2023-06-08 18:59:43,950 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/meta/1588230740/rep_barrier 2023-06-08 18:59:43,950 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-08 18:59:43,951 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:59:43,951 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-08 18:59:43,952 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/meta/1588230740/table 2023-06-08 18:59:43,952 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/meta/1588230740/table 2023-06-08 18:59:43,953 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-08 18:59:43,953 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 18:59:43,954 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/meta/1588230740 2023-06-08 18:59:43,954 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/meta/1588230740 2023-06-08 18:59:43,956 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-06-08 18:59:43,959 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-08 18:59:43,960 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=714305, jitterRate=-0.09171457588672638}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-08 18:59:43,960 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-08 18:59:43,961 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1686250783920 2023-06-08 18:59:43,965 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-08 18:59:43,966 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-08 18:59:43,966 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,46357,1686250783442, state=OPEN 2023-06-08 18:59:43,967 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-08 18:59:43,968 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-08 18:59:43,969 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-08 18:59:43,969 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,46357,1686250783442 in 202 msec 2023-06-08 18:59:43,971 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-08 18:59:43,971 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 364 msec 2023-06-08 18:59:43,973 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 415 msec 2023-06-08 18:59:43,973 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1686250783973, completionTime=-1 2023-06-08 18:59:43,973 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-08 18:59:43,973 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-06-08 18:59:43,978 DEBUG [hconnection-0x5764dedd-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-08 18:59:43,981 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:57648, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-08 18:59:43,982 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-08 18:59:43,982 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1686250843982 2023-06-08 18:59:43,982 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1686250903982 2023-06-08 18:59:43,982 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 9 msec 2023-06-08 18:59:43,990 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,43573,1686250783406-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-08 18:59:43,990 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,43573,1686250783406-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-08 18:59:43,990 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,43573,1686250783406-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-06-08 18:59:43,990 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase17:43573, period=300000, unit=MILLISECONDS is enabled. 2023-06-08 18:59:43,990 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-08 18:59:43,990 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-06-08 18:59:43,991 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-08 18:59:43,992 DEBUG [master/jenkins-hbase17:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-08 18:59:43,992 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-08 18:59:43,994 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-08 18:59:43,994 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-08 18:59:43,997 DEBUG [HFileArchiver-11] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/.tmp/data/hbase/namespace/4e00b49c46c6c94d9a06fc3faa8bfafe 2023-06-08 18:59:43,997 DEBUG 
[HFileArchiver-11] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/.tmp/data/hbase/namespace/4e00b49c46c6c94d9a06fc3faa8bfafe empty. 2023-06-08 18:59:43,998 DEBUG [HFileArchiver-11] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/.tmp/data/hbase/namespace/4e00b49c46c6c94d9a06fc3faa8bfafe 2023-06-08 18:59:43,998 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-08 18:59:44,008 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-08 18:59:44,009 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4e00b49c46c6c94d9a06fc3faa8bfafe, NAME => 'hbase:namespace,,1686250783990.4e00b49c46c6c94d9a06fc3faa8bfafe.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/.tmp 2023-06-08 18:59:44,018 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686250783990.4e00b49c46c6c94d9a06fc3faa8bfafe.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:59:44,018 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 4e00b49c46c6c94d9a06fc3faa8bfafe, disabling compactions & flushes 2023-06-08 
18:59:44,018 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686250783990.4e00b49c46c6c94d9a06fc3faa8bfafe. 2023-06-08 18:59:44,018 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686250783990.4e00b49c46c6c94d9a06fc3faa8bfafe. 2023-06-08 18:59:44,019 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686250783990.4e00b49c46c6c94d9a06fc3faa8bfafe. after waiting 0 ms 2023-06-08 18:59:44,019 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686250783990.4e00b49c46c6c94d9a06fc3faa8bfafe. 2023-06-08 18:59:44,019 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686250783990.4e00b49c46c6c94d9a06fc3faa8bfafe. 2023-06-08 18:59:44,019 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 4e00b49c46c6c94d9a06fc3faa8bfafe: 2023-06-08 18:59:44,020 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-08 18:59:44,021 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1686250783990.4e00b49c46c6c94d9a06fc3faa8bfafe.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686250784021"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686250784021"}]},"ts":"1686250784021"} 2023-06-08 18:59:44,023 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-06-08 18:59:44,024 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-08 18:59:44,024 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686250784024"}]},"ts":"1686250784024"} 2023-06-08 18:59:44,025 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-08 18:59:44,030 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=4e00b49c46c6c94d9a06fc3faa8bfafe, ASSIGN}] 2023-06-08 18:59:44,032 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=4e00b49c46c6c94d9a06fc3faa8bfafe, ASSIGN 2023-06-08 18:59:44,032 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=4e00b49c46c6c94d9a06fc3faa8bfafe, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,46357,1686250783442; forceNewPlan=false, retain=false 2023-06-08 18:59:44,183 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=4e00b49c46c6c94d9a06fc3faa8bfafe, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,46357,1686250783442 2023-06-08 18:59:44,184 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1686250783990.4e00b49c46c6c94d9a06fc3faa8bfafe.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686250784183"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686250784183"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686250784183"}]},"ts":"1686250784183"} 2023-06-08 18:59:44,185 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 4e00b49c46c6c94d9a06fc3faa8bfafe, server=jenkins-hbase17.apache.org,46357,1686250783442}] 2023-06-08 18:59:44,340 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1686250783990.4e00b49c46c6c94d9a06fc3faa8bfafe. 2023-06-08 18:59:44,341 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4e00b49c46c6c94d9a06fc3faa8bfafe, NAME => 'hbase:namespace,,1686250783990.4e00b49c46c6c94d9a06fc3faa8bfafe.', STARTKEY => '', ENDKEY => ''} 2023-06-08 18:59:44,341 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 4e00b49c46c6c94d9a06fc3faa8bfafe 2023-06-08 18:59:44,341 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686250783990.4e00b49c46c6c94d9a06fc3faa8bfafe.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 18:59:44,341 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 4e00b49c46c6c94d9a06fc3faa8bfafe 2023-06-08 18:59:44,341 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 4e00b49c46c6c94d9a06fc3faa8bfafe 2023-06-08 18:59:44,342 INFO 
[StoreOpener-4e00b49c46c6c94d9a06fc3faa8bfafe-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 4e00b49c46c6c94d9a06fc3faa8bfafe
2023-06-08 18:59:44,343 DEBUG [StoreOpener-4e00b49c46c6c94d9a06fc3faa8bfafe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/namespace/4e00b49c46c6c94d9a06fc3faa8bfafe/info
2023-06-08 18:59:44,343 DEBUG [StoreOpener-4e00b49c46c6c94d9a06fc3faa8bfafe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/namespace/4e00b49c46c6c94d9a06fc3faa8bfafe/info
2023-06-08 18:59:44,344 INFO [StoreOpener-4e00b49c46c6c94d9a06fc3faa8bfafe-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4e00b49c46c6c94d9a06fc3faa8bfafe columnFamilyName info
2023-06-08 18:59:44,344 INFO [StoreOpener-4e00b49c46c6c94d9a06fc3faa8bfafe-1] regionserver.HStore(310): Store=4e00b49c46c6c94d9a06fc3faa8bfafe/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-06-08 18:59:44,345 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/namespace/4e00b49c46c6c94d9a06fc3faa8bfafe
2023-06-08 18:59:44,345 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/namespace/4e00b49c46c6c94d9a06fc3faa8bfafe
2023-06-08 18:59:44,347 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 4e00b49c46c6c94d9a06fc3faa8bfafe
2023-06-08 18:59:44,349 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/namespace/4e00b49c46c6c94d9a06fc3faa8bfafe/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-06-08 18:59:44,349 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 4e00b49c46c6c94d9a06fc3faa8bfafe; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=714697, jitterRate=-0.09121629595756531}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-06-08 18:59:44,349 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 4e00b49c46c6c94d9a06fc3faa8bfafe:
2023-06-08 18:59:44,351 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1686250783990.4e00b49c46c6c94d9a06fc3faa8bfafe., pid=6, masterSystemTime=1686250784337
2023-06-08 18:59:44,353 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1686250783990.4e00b49c46c6c94d9a06fc3faa8bfafe.
2023-06-08 18:59:44,353 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1686250783990.4e00b49c46c6c94d9a06fc3faa8bfafe.
2023-06-08 18:59:44,353 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=4e00b49c46c6c94d9a06fc3faa8bfafe, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,46357,1686250783442
2023-06-08 18:59:44,353 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1686250783990.4e00b49c46c6c94d9a06fc3faa8bfafe.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686250784353"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686250784353"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686250784353"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686250784353"}]},"ts":"1686250784353"}
2023-06-08 18:59:44,356 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5
2023-06-08 18:59:44,356 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 4e00b49c46c6c94d9a06fc3faa8bfafe, server=jenkins-hbase17.apache.org,46357,1686250783442 in 169 msec
2023-06-08 18:59:44,359 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4
2023-06-08 18:59:44,359 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=4e00b49c46c6c94d9a06fc3faa8bfafe, ASSIGN in 327 msec
2023-06-08 18:59:44,359 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE
2023-06-08 18:59:44,360 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686250784359"}]},"ts":"1686250784359"}
2023-06-08 18:59:44,361 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta
2023-06-08 18:59:44,363 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION
2023-06-08 18:59:44,365 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 372 msec
2023-06-08 18:59:44,393 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace
2023-06-08 18:59:44,395 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace
2023-06-08 18:59:44,395 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-08 18:59:44,401 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default
2023-06-08 18:59:44,413 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace
2023-06-08 18:59:44,416 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 15 msec
2023-06-08 18:59:44,424 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase
2023-06-08 18:59:44,434 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace
2023-06-08 18:59:44,438 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 14 msec
2023-06-08 18:59:44,447 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default
2023-06-08 18:59:44,449 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase
2023-06-08 18:59:44,449 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.984sec
2023-06-08 18:59:44,449 INFO [master/jenkins-hbase17:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled
2023-06-08 18:59:44,450 INFO [master/jenkins-hbase17:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting.
2023-06-08 18:59:44,450 INFO [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding
2023-06-08 18:59:44,450 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,43573,1686250783406-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled.
2023-06-08 18:59:44,450 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,43573,1686250783406-MobCompactionChore, period=604800, unit=SECONDS is enabled.
2023-06-08 18:59:44,452 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds
2023-06-08 18:59:44,460 DEBUG [Listener at localhost.localdomain/40939] zookeeper.ReadOnlyZKClient(139): Connect 0x43558367 to 127.0.0.1:61260 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-06-08 18:59:44,466 DEBUG [Listener at localhost.localdomain/40939] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@62907b3e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-06-08 18:59:44,468 DEBUG [hconnection-0x2e6b2262-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-06-08 18:59:44,470 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:57662, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-06-08 18:59:44,472 INFO [Listener at localhost.localdomain/40939] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase17.apache.org,43573,1686250783406
2023-06-08 18:59:44,472 INFO [Listener at localhost.localdomain/40939] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-06-08 18:59:44,475 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer
2023-06-08 18:59:44,475 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-08 18:59:44,476 INFO [Listener at localhost.localdomain/40939] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false
2023-06-08 18:59:44,476 INFO [Listener at localhost.localdomain/40939] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-06-08 18:59:44,478 INFO [Listener at localhost.localdomain/40939] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=test.com%2C8080%2C1, suffix=, logDir=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/WALs/test.com,8080,1, archiveDir=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/oldWALs, maxLogs=32
2023-06-08 18:59:44,484 INFO [Listener at localhost.localdomain/40939] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/WALs/test.com,8080,1/test.com%2C8080%2C1.1686250784479
2023-06-08 18:59:44,484 DEBUG [Listener at localhost.localdomain/40939] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40997,DS-4e421233-0e58-462d-acba-12b1c1ba5428,DISK], DatanodeInfoWithStorage[127.0.0.1:34283,DS-b851dc84-617e-42be-88f7-92c5ad966944,DISK]]
2023-06-08 18:59:44,493 INFO [Listener at localhost.localdomain/40939] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/WALs/test.com,8080,1/test.com%2C8080%2C1.1686250784479 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/WALs/test.com,8080,1/test.com%2C8080%2C1.1686250784485
2023-06-08 18:59:44,493 DEBUG [Listener at localhost.localdomain/40939] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34283,DS-b851dc84-617e-42be-88f7-92c5ad966944,DISK], DatanodeInfoWithStorage[127.0.0.1:40997,DS-4e421233-0e58-462d-acba-12b1c1ba5428,DISK]]
2023-06-08 18:59:44,493 DEBUG [Listener at localhost.localdomain/40939] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/WALs/test.com,8080,1/test.com%2C8080%2C1.1686250784479 is not closed yet, will try archiving it next time
2023-06-08 18:59:44,497 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/WALs/test.com,8080,1
2023-06-08 18:59:44,507 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/WALs/test.com,8080,1/test.com%2C8080%2C1.1686250784479 to hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/oldWALs/test.com%2C8080%2C1.1686250784479
2023-06-08 18:59:44,510 DEBUG [Listener at localhost.localdomain/40939] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/oldWALs
2023-06-08 18:59:44,510 INFO [Listener at localhost.localdomain/40939] wal.AbstractFSWAL(1031): Closed WAL: FSHLog test.com%2C8080%2C1:(num 1686250784485)
2023-06-08 18:59:44,510 INFO [Listener at localhost.localdomain/40939] hbase.HBaseTestingUtility(1286): Shutting down minicluster
2023-06-08 18:59:44,510 DEBUG [Listener at localhost.localdomain/40939] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x43558367 to 127.0.0.1:61260
2023-06-08 18:59:44,510 DEBUG [Listener at localhost.localdomain/40939] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-08 18:59:44,512 DEBUG [Listener at localhost.localdomain/40939] util.JVMClusterUtil(237): Shutting down HBase Cluster
2023-06-08 18:59:44,512 DEBUG [Listener at localhost.localdomain/40939] util.JVMClusterUtil(257): Found active master hash=1146392116, stopped=false
2023-06-08 18:59:44,512 INFO [Listener at localhost.localdomain/40939] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase17.apache.org,43573,1686250783406
2023-06-08 18:59:44,513 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): regionserver:46357-0x100abce7db90001, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-06-08 18:59:44,513 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-06-08 18:59:44,513 INFO [Listener at localhost.localdomain/40939] procedure2.ProcedureExecutor(629): Stopping
2023-06-08 18:59:44,513 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-08 18:59:44,514 DEBUG [Listener at localhost.localdomain/40939] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6c34a6f8 to 127.0.0.1:61260
2023-06-08 18:59:44,514 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46357-0x100abce7db90001, quorum=127.0.0.1:61260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-08 18:59:44,514 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-06-08 18:59:44,514 DEBUG [Listener at localhost.localdomain/40939] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-08 18:59:44,514 INFO [Listener at localhost.localdomain/40939] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase17.apache.org,46357,1686250783442' *****
2023-06-08 18:59:44,514 INFO [Listener at localhost.localdomain/40939] regionserver.HRegionServer(2309): STOPPED: Shutdown requested
2023-06-08 18:59:44,515 INFO [RS:0;jenkins-hbase17:46357] regionserver.HeapMemoryManager(220): Stopping
2023-06-08 18:59:44,515 INFO [RS:0;jenkins-hbase17:46357] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully.
2023-06-08 18:59:44,515 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting
2023-06-08 18:59:44,515 INFO [RS:0;jenkins-hbase17:46357] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully.
2023-06-08 18:59:44,515 INFO [RS:0;jenkins-hbase17:46357] regionserver.HRegionServer(3303): Received CLOSE for 4e00b49c46c6c94d9a06fc3faa8bfafe
2023-06-08 18:59:44,515 INFO [RS:0;jenkins-hbase17:46357] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,46357,1686250783442
2023-06-08 18:59:44,515 DEBUG [RS:0;jenkins-hbase17:46357] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x79729eae to 127.0.0.1:61260
2023-06-08 18:59:44,516 DEBUG [RS:0;jenkins-hbase17:46357] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-08 18:59:44,516 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 4e00b49c46c6c94d9a06fc3faa8bfafe, disabling compactions & flushes
2023-06-08 18:59:44,516 INFO [RS:0;jenkins-hbase17:46357] regionserver.CompactSplit(434): Waiting for Split Thread to finish...
2023-06-08 18:59:44,516 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686250783990.4e00b49c46c6c94d9a06fc3faa8bfafe.
2023-06-08 18:59:44,516 INFO [RS:0;jenkins-hbase17:46357] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish...
2023-06-08 18:59:44,516 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686250783990.4e00b49c46c6c94d9a06fc3faa8bfafe.
2023-06-08 18:59:44,516 INFO [RS:0;jenkins-hbase17:46357] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish...
2023-06-08 18:59:44,516 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686250783990.4e00b49c46c6c94d9a06fc3faa8bfafe. after waiting 0 ms
2023-06-08 18:59:44,516 INFO [RS:0;jenkins-hbase17:46357] regionserver.HRegionServer(3303): Received CLOSE for 1588230740
2023-06-08 18:59:44,516 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686250783990.4e00b49c46c6c94d9a06fc3faa8bfafe.
2023-06-08 18:59:44,516 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 4e00b49c46c6c94d9a06fc3faa8bfafe 1/1 column families, dataSize=78 B heapSize=488 B
2023-06-08 18:59:44,516 INFO [RS:0;jenkins-hbase17:46357] regionserver.HRegionServer(1474): Waiting on 2 regions to close
2023-06-08 18:59:44,516 DEBUG [RS:0;jenkins-hbase17:46357] regionserver.HRegionServer(1478): Online Regions={4e00b49c46c6c94d9a06fc3faa8bfafe=hbase:namespace,,1686250783990.4e00b49c46c6c94d9a06fc3faa8bfafe., 1588230740=hbase:meta,,1.1588230740}
2023-06-08 18:59:44,516 DEBUG [RS:0;jenkins-hbase17:46357] regionserver.HRegionServer(1504): Waiting on 1588230740, 4e00b49c46c6c94d9a06fc3faa8bfafe
2023-06-08 18:59:44,517 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes
2023-06-08 18:59:44,517 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740
2023-06-08 18:59:44,517 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740
2023-06-08 18:59:44,517 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms
2023-06-08 18:59:44,517 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740
2023-06-08 18:59:44,517 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=1.26 KB heapSize=2.89 KB
2023-06-08 18:59:44,525 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.17 KB at sequenceid=9 (bloomFilter=false), to=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/meta/1588230740/.tmp/info/e61be45ce84c4e6690e2641f8814ecaa
2023-06-08 18:59:44,526 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/namespace/4e00b49c46c6c94d9a06fc3faa8bfafe/.tmp/info/b3c4edd1ec204b709bfbedeb68eb7e66
2023-06-08 18:59:44,532 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/namespace/4e00b49c46c6c94d9a06fc3faa8bfafe/.tmp/info/b3c4edd1ec204b709bfbedeb68eb7e66 as hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/namespace/4e00b49c46c6c94d9a06fc3faa8bfafe/info/b3c4edd1ec204b709bfbedeb68eb7e66
2023-06-08 18:59:44,538 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/namespace/4e00b49c46c6c94d9a06fc3faa8bfafe/info/b3c4edd1ec204b709bfbedeb68eb7e66, entries=2, sequenceid=6, filesize=4.8 K
2023-06-08 18:59:44,540 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 4e00b49c46c6c94d9a06fc3faa8bfafe in 24ms, sequenceid=6, compaction requested=false
2023-06-08 18:59:44,540 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace'
2023-06-08 18:59:44,544 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=94 B at sequenceid=9 (bloomFilter=false), to=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/meta/1588230740/.tmp/table/6d26877eebac4ad5bfbe88a7323ae293
2023-06-08 18:59:44,546 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/namespace/4e00b49c46c6c94d9a06fc3faa8bfafe/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1
2023-06-08 18:59:44,546 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686250783990.4e00b49c46c6c94d9a06fc3faa8bfafe.
2023-06-08 18:59:44,546 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 4e00b49c46c6c94d9a06fc3faa8bfafe:
2023-06-08 18:59:44,546 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1686250783990.4e00b49c46c6c94d9a06fc3faa8bfafe.
2023-06-08 18:59:44,549 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/meta/1588230740/.tmp/info/e61be45ce84c4e6690e2641f8814ecaa as hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/meta/1588230740/info/e61be45ce84c4e6690e2641f8814ecaa
2023-06-08 18:59:44,554 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/meta/1588230740/info/e61be45ce84c4e6690e2641f8814ecaa, entries=10, sequenceid=9, filesize=5.9 K
2023-06-08 18:59:44,555 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/meta/1588230740/.tmp/table/6d26877eebac4ad5bfbe88a7323ae293 as hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/meta/1588230740/table/6d26877eebac4ad5bfbe88a7323ae293
2023-06-08 18:59:44,561 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/meta/1588230740/table/6d26877eebac4ad5bfbe88a7323ae293, entries=2, sequenceid=9, filesize=4.7 K
2023-06-08 18:59:44,562 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.26 KB/1292, heapSize ~2.61 KB/2672, currentSize=0 B/0 for 1588230740 in 45ms, sequenceid=9, compaction requested=false
2023-06-08 18:59:44,562 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta'
2023-06-08 18:59:44,568 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/data/hbase/meta/1588230740/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1
2023-06-08 18:59:44,569 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2023-06-08 18:59:44,569 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740
2023-06-08 18:59:44,569 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-06-08 18:59:44,569 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740
2023-06-08 18:59:44,601 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped
2023-06-08 18:59:44,602 INFO [regionserver/jenkins-hbase17:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped
2023-06-08 18:59:44,717 INFO [RS:0;jenkins-hbase17:46357] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,46357,1686250783442; all regions closed.
2023-06-08 18:59:44,717 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/WALs/jenkins-hbase17.apache.org,46357,1686250783442
2023-06-08 18:59:44,723 DEBUG [RS:0;jenkins-hbase17:46357] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/oldWALs
2023-06-08 18:59:44,724 INFO [RS:0;jenkins-hbase17:46357] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase17.apache.org%2C46357%2C1686250783442.meta:.meta(num 1686250783936)
2023-06-08 18:59:44,724 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/WALs/jenkins-hbase17.apache.org,46357,1686250783442
2023-06-08 18:59:44,731 DEBUG [RS:0;jenkins-hbase17:46357] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/oldWALs
2023-06-08 18:59:44,731 INFO [RS:0;jenkins-hbase17:46357] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase17.apache.org%2C46357%2C1686250783442:(num 1686250783728)
2023-06-08 18:59:44,731 DEBUG [RS:0;jenkins-hbase17:46357] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-08 18:59:44,731 INFO [RS:0;jenkins-hbase17:46357] regionserver.LeaseManager(133): Closed leases
2023-06-08 18:59:44,732 INFO [RS:0;jenkins-hbase17:46357] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown
2023-06-08 18:59:44,732 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-06-08 18:59:44,733 INFO [RS:0;jenkins-hbase17:46357] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:46357
2023-06-08 18:59:44,736 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): regionserver:46357-0x100abce7db90001, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,46357,1686250783442
2023-06-08 18:59:44,736 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-06-08 18:59:44,736 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): regionserver:46357-0x100abce7db90001, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-06-08 18:59:44,737 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,46357,1686250783442]
2023-06-08 18:59:44,737 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,46357,1686250783442; numProcessing=1
2023-06-08 18:59:44,738 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,46357,1686250783442 already deleted, retry=false
2023-06-08 18:59:44,738 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,46357,1686250783442 expired; onlineServers=0
2023-06-08 18:59:44,738 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase17.apache.org,43573,1686250783406' *****
2023-06-08 18:59:44,738 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0
2023-06-08 18:59:44,739 DEBUG [M:0;jenkins-hbase17:43573] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@406b1266, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase17.apache.org/136.243.18.41:0
2023-06-08 18:59:44,739 INFO [M:0;jenkins-hbase17:43573] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,43573,1686250783406
2023-06-08 18:59:44,739 INFO [M:0;jenkins-hbase17:43573] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,43573,1686250783406; all regions closed.
2023-06-08 18:59:44,739 DEBUG [M:0;jenkins-hbase17:43573] ipc.AbstractRpcClient(494): Stopping rpc client
2023-06-08 18:59:44,739 DEBUG [M:0;jenkins-hbase17:43573] cleaner.LogCleaner(198): Cancelling LogCleaner
2023-06-08 18:59:44,739 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting.
2023-06-08 18:59:44,739 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1686250783566] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1686250783566,5,FailOnTimeoutGroup]
2023-06-08 18:59:44,739 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1686250783565] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1686250783565,5,FailOnTimeoutGroup]
2023-06-08 18:59:44,739 DEBUG [M:0;jenkins-hbase17:43573] cleaner.HFileCleaner(317): Stopping file delete threads
2023-06-08 18:59:44,740 INFO [M:0;jenkins-hbase17:43573] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish...
2023-06-08 18:59:44,740 INFO [M:0;jenkins-hbase17:43573] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish...
2023-06-08 18:59:44,740 INFO [M:0;jenkins-hbase17:43573] hbase.ChoreService(369): Chore service for: master/jenkins-hbase17:0 had [] on shutdown
2023-06-08 18:59:44,740 DEBUG [M:0;jenkins-hbase17:43573] master.HMaster(1512): Stopping service threads
2023-06-08 18:59:44,740 INFO [M:0;jenkins-hbase17:43573] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher
2023-06-08 18:59:44,741 ERROR [M:0;jenkins-hbase17:43573] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-11,5,PEWorkerGroup]
2023-06-08 18:59:44,741 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2023-06-08 18:59:44,741 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-06-08 18:59:44,741 INFO [M:0;jenkins-hbase17:43573] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false
2023-06-08 18:59:44,741 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating.
2023-06-08 18:59:44,741 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-06-08 18:59:44,741 DEBUG [M:0;jenkins-hbase17:43573] zookeeper.ZKUtil(398): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-06-08 18:59:44,741 WARN [M:0;jenkins-hbase17:43573] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-06-08 18:59:44,741 INFO [M:0;jenkins-hbase17:43573] assignment.AssignmentManager(315): Stopping assignment manager
2023-06-08 18:59:44,742 INFO [M:0;jenkins-hbase17:43573] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-06-08 18:59:44,743 DEBUG [M:0;jenkins-hbase17:43573] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-06-08 18:59:44,743 INFO [M:0;jenkins-hbase17:43573] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-08 18:59:44,743 DEBUG [M:0;jenkins-hbase17:43573] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-08 18:59:44,743 DEBUG [M:0;jenkins-hbase17:43573] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-06-08 18:59:44,743 DEBUG [M:0;jenkins-hbase17:43573] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-08 18:59:44,743 INFO [M:0;jenkins-hbase17:43573] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=24.09 KB heapSize=29.59 KB
2023-06-08 18:59:44,753 INFO [M:0;jenkins-hbase17:43573] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.09 KB at sequenceid=66 (bloomFilter=true), to=hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/fe3499b1a02e4723b0bb4a651e2edd9b
2023-06-08 18:59:44,758 DEBUG [M:0;jenkins-hbase17:43573] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/fe3499b1a02e4723b0bb4a651e2edd9b as hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/fe3499b1a02e4723b0bb4a651e2edd9b
2023-06-08 18:59:44,762 INFO [M:0;jenkins-hbase17:43573] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38497/user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/fe3499b1a02e4723b0bb4a651e2edd9b, entries=8, sequenceid=66, filesize=6.3 K
2023-06-08 18:59:44,763 INFO [M:0;jenkins-hbase17:43573] regionserver.HRegion(2948): Finished flush of dataSize ~24.09 KB/24669, heapSize ~29.57 KB/30280, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 20ms, sequenceid=66, compaction requested=false
2023-06-08 18:59:44,764 INFO [M:0;jenkins-hbase17:43573] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-06-08 18:59:44,764 DEBUG [M:0;jenkins-hbase17:43573] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-08 18:59:44,764 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/643221a1-fa83-72d4-174e-93e69c385ea4/MasterData/WALs/jenkins-hbase17.apache.org,43573,1686250783406 2023-06-08 18:59:44,767 INFO [M:0;jenkins-hbase17:43573] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-06-08 18:59:44,767 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-08 18:59:44,767 INFO [M:0;jenkins-hbase17:43573] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:43573 2023-06-08 18:59:44,769 DEBUG [M:0;jenkins-hbase17:43573] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase17.apache.org,43573,1686250783406 already deleted, retry=false 2023-06-08 18:59:44,917 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 18:59:44,917 INFO [M:0;jenkins-hbase17:43573] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,43573,1686250783406; zookeeper connection closed. 
2023-06-08 18:59:44,917 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): master:43573-0x100abce7db90000, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 18:59:45,017 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): regionserver:46357-0x100abce7db90001, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 18:59:45,017 DEBUG [Listener at localhost.localdomain/40939-EventThread] zookeeper.ZKWatcher(600): regionserver:46357-0x100abce7db90001, quorum=127.0.0.1:61260, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 18:59:45,017 INFO [RS:0;jenkins-hbase17:46357] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,46357,1686250783442; zookeeper connection closed. 2023-06-08 18:59:45,019 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@678b2703] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@678b2703 2023-06-08 18:59:45,019 INFO [Listener at localhost.localdomain/40939] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-06-08 18:59:45,020 WARN [Listener at localhost.localdomain/40939] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 18:59:45,029 INFO [Listener at localhost.localdomain/40939] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 18:59:45,133 WARN [BP-1935984495-136.243.18.41-1686250782919 heartbeating to localhost.localdomain/127.0.0.1:38497] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-08 18:59:45,133 WARN [BP-1935984495-136.243.18.41-1686250782919 heartbeating to localhost.localdomain/127.0.0.1:38497] datanode.BPServiceActor(857): Ending block pool service 
for: Block pool BP-1935984495-136.243.18.41-1686250782919 (Datanode Uuid 25ade0d7-7a87-49d1-9b3b-fbfeeb1a0865) service to localhost.localdomain/127.0.0.1:38497 2023-06-08 18:59:45,134 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/06ab4017-d22f-6856-a463-c47a5c2a4e1d/cluster_49106180-8597-a212-366f-c8b12fe5c3bd/dfs/data/data3/current/BP-1935984495-136.243.18.41-1686250782919] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 18:59:45,134 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/06ab4017-d22f-6856-a463-c47a5c2a4e1d/cluster_49106180-8597-a212-366f-c8b12fe5c3bd/dfs/data/data4/current/BP-1935984495-136.243.18.41-1686250782919] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 18:59:45,135 WARN [Listener at localhost.localdomain/40939] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 18:59:45,142 INFO [Listener at localhost.localdomain/40939] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 18:59:45,234 WARN [BP-1935984495-136.243.18.41-1686250782919 heartbeating to localhost.localdomain/127.0.0.1:38497] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1935984495-136.243.18.41-1686250782919 (Datanode Uuid 2e1cdf66-cc5e-4b6f-932e-87c8fd33c218) service to localhost.localdomain/127.0.0.1:38497 2023-06-08 18:59:45,235 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/06ab4017-d22f-6856-a463-c47a5c2a4e1d/cluster_49106180-8597-a212-366f-c8b12fe5c3bd/dfs/data/data1/current/BP-1935984495-136.243.18.41-1686250782919] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: 
sleep interrupted 2023-06-08 18:59:45,236 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/06ab4017-d22f-6856-a463-c47a5c2a4e1d/cluster_49106180-8597-a212-366f-c8b12fe5c3bd/dfs/data/data2/current/BP-1935984495-136.243.18.41-1686250782919] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 18:59:45,264 INFO [Listener at localhost.localdomain/40939] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-06-08 18:59:45,373 INFO [Listener at localhost.localdomain/40939] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-06-08 18:59:45,383 INFO [Listener at localhost.localdomain/40939] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-06-08 18:59:45,396 INFO [Listener at localhost.localdomain/40939] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=132 (was 107) - Thread LEAK? -, OpenFileDescriptor=560 (was 532) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=229 (was 229), ProcessCount=184 (was 184), AvailableMemoryMB=839 (was 875)