2023-06-08 16:53:33,255 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c 2023-06-08 16:53:33,266 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.regionserver.wal.TestLogRolling timeout: 13 mins 2023-06-08 16:53:33,298 INFO [Time-limited test] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=10, OpenFileDescriptor=263, MaxFileDescriptor=60000, SystemLoadAverage=204, ProcessCount=187, AvailableMemoryMB=2898 2023-06-08 16:53:33,304 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-06-08 16:53:33,304 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c/cluster_c918d1ca-c7e8-de1e-245b-62226e28fc77, deleteOnExit=true 2023-06-08 16:53:33,305 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-06-08 16:53:33,305 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c/test.cache.data in system properties and HBase conf 2023-06-08 16:53:33,306 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c/hadoop.tmp.dir in system properties and HBase conf 2023-06-08 16:53:33,306 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c/hadoop.log.dir in system properties and HBase conf 2023-06-08 16:53:33,306 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c/mapreduce.cluster.local.dir in system properties and HBase conf 2023-06-08 16:53:33,307 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-06-08 16:53:33,307 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-06-08 16:53:33,400 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-06-08 16:53:33,727 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-06-08 16:53:33,730 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-06-08 16:53:33,731 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-06-08 16:53:33,731 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-06-08 16:53:33,732 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-08 16:53:33,732 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-06-08 16:53:33,733 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-08 16:53:33,733 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-08 16:53:33,733 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-08 16:53:33,734 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-08 16:53:33,734 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c/nfs.dump.dir in system properties and HBase conf 2023-06-08 16:53:33,735 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c/java.io.tmpdir in system properties and HBase conf 2023-06-08 16:53:33,735 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-08 16:53:33,735 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-08 16:53:33,736 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-08 16:53:34,144 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-06-08 16:53:34,155 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-08 16:53:34,158 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-08 16:53:34,379 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-06-08 16:53:34,505 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-06-08 16:53:34,518 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 16:53:34,548 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 16:53:34,600 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c/java.io.tmpdir/Jetty_localhost_localdomain_34781_hdfs____8pxsuh/webapp 2023-06-08 16:53:34,716 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:34781 2023-06-08 16:53:34,723 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-06-08 16:53:34,725 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-08 16:53:34,725 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-08 16:53:35,074 WARN [Listener at localhost.localdomain/33111] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 16:53:35,131 WARN [Listener at localhost.localdomain/33111] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 16:53:35,149 WARN [Listener at localhost.localdomain/33111] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 16:53:35,155 INFO [Listener at localhost.localdomain/33111] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 16:53:35,160 INFO [Listener at localhost.localdomain/33111] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c/java.io.tmpdir/Jetty_localhost_36457_datanode____z345bu/webapp 2023-06-08 16:53:35,241 INFO [Listener at localhost.localdomain/33111] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36457 2023-06-08 16:53:35,523 WARN [Listener at localhost.localdomain/39885] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 16:53:35,531 WARN [Listener at localhost.localdomain/39885] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 16:53:35,535 WARN [Listener at localhost.localdomain/39885] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 16:53:35,537 INFO [Listener at localhost.localdomain/39885] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 16:53:35,541 INFO [Listener at localhost.localdomain/39885] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c/java.io.tmpdir/Jetty_localhost_36533_datanode____p4aho9/webapp 2023-06-08 16:53:35,621 INFO [Listener at localhost.localdomain/39885] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36533 2023-06-08 16:53:35,628 WARN [Listener at localhost.localdomain/38529] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 16:53:35,900 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfe5a7e20153255e: Processing first storage report for DS-98f3a072-5aa3-42b1-b053-c48411653aba from datanode c8159548-fe81-41b3-8861-346c593825fa 2023-06-08 16:53:35,901 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfe5a7e20153255e: from storage DS-98f3a072-5aa3-42b1-b053-c48411653aba node DatanodeRegistration(127.0.0.1:33759, datanodeUuid=c8159548-fe81-41b3-8861-346c593825fa, infoPort=35811, infoSecurePort=0, ipcPort=39885, storageInfo=lv=-57;cid=testClusterID;nsid=1619108804;c=1686243214216), 
blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-08 16:53:35,901 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8fec41c9f9176f7d: Processing first storage report for DS-512bd96b-4315-4608-8c40-c52451a39796 from datanode 1bd71afd-d1eb-4924-a704-10683bf11362 2023-06-08 16:53:35,901 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8fec41c9f9176f7d: from storage DS-512bd96b-4315-4608-8c40-c52451a39796 node DatanodeRegistration(127.0.0.1:45383, datanodeUuid=1bd71afd-d1eb-4924-a704-10683bf11362, infoPort=33059, infoSecurePort=0, ipcPort=38529, storageInfo=lv=-57;cid=testClusterID;nsid=1619108804;c=1686243214216), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 16:53:35,901 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfe5a7e20153255e: Processing first storage report for DS-53aa240c-768d-4bff-b5c8-4177c8998fe4 from datanode c8159548-fe81-41b3-8861-346c593825fa 2023-06-08 16:53:35,901 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfe5a7e20153255e: from storage DS-53aa240c-768d-4bff-b5c8-4177c8998fe4 node DatanodeRegistration(127.0.0.1:33759, datanodeUuid=c8159548-fe81-41b3-8861-346c593825fa, infoPort=35811, infoSecurePort=0, ipcPort=39885, storageInfo=lv=-57;cid=testClusterID;nsid=1619108804;c=1686243214216), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 16:53:35,902 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8fec41c9f9176f7d: Processing first storage report for DS-5445c075-d597-4e06-a7ad-8a8bb4465651 from datanode 1bd71afd-d1eb-4924-a704-10683bf11362 2023-06-08 16:53:35,902 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8fec41c9f9176f7d: from storage DS-5445c075-d597-4e06-a7ad-8a8bb4465651 node DatanodeRegistration(127.0.0.1:45383, datanodeUuid=1bd71afd-d1eb-4924-a704-10683bf11362, infoPort=33059, infoSecurePort=0, ipcPort=38529, storageInfo=lv=-57;cid=testClusterID;nsid=1619108804;c=1686243214216), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 16:53:35,961 DEBUG [Listener at localhost.localdomain/38529] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c 2023-06-08 16:53:36,020 INFO [Listener at localhost.localdomain/38529] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c/cluster_c918d1ca-c7e8-de1e-245b-62226e28fc77/zookeeper_0, clientPort=62547, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c/cluster_c918d1ca-c7e8-de1e-245b-62226e28fc77/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c/cluster_c918d1ca-c7e8-de1e-245b-62226e28fc77/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-06-08 16:53:36,032 INFO [Listener at localhost.localdomain/38529] 
zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=62547 2023-06-08 16:53:36,041 INFO [Listener at localhost.localdomain/38529] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:53:36,043 INFO [Listener at localhost.localdomain/38529] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:53:36,653 INFO [Listener at localhost.localdomain/38529] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb with version=8 2023-06-08 16:53:36,653 INFO [Listener at localhost.localdomain/38529] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/hbase-staging 2023-06-08 16:53:36,904 INFO [Listener at localhost.localdomain/38529] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-06-08 16:53:37,270 INFO [Listener at localhost.localdomain/38529] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-06-08 16:53:37,295 INFO [Listener at localhost.localdomain/38529] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 16:53:37,296 INFO [Listener at localhost.localdomain/38529] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-08 16:53:37,296 INFO [Listener at localhost.localdomain/38529] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-08 16:53:37,296 INFO [Listener at localhost.localdomain/38529] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 16:53:37,297 INFO [Listener at localhost.localdomain/38529] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-08 16:53:37,418 INFO [Listener at localhost.localdomain/38529] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-06-08 16:53:37,478 DEBUG [Listener at localhost.localdomain/38529] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-06-08 16:53:37,556 INFO [Listener at localhost.localdomain/38529] ipc.NettyRpcServer(120): Bind to /148.251.75.209:37063 2023-06-08 16:53:37,566 INFO [Listener at localhost.localdomain/38529] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:53:37,569 INFO [Listener at localhost.localdomain/38529] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so 
can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:53:37,590 INFO [Listener at localhost.localdomain/38529] zookeeper.RecoverableZooKeeper(93): Process identifier=master:37063 connecting to ZooKeeper ensemble=127.0.0.1:62547 2023-06-08 16:53:37,623 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): master:370630x0, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-08 16:53:37,625 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:37063-0x101cba3a2ae0000 connected 2023-06-08 16:53:37,645 DEBUG [Listener at localhost.localdomain/38529] zookeeper.ZKUtil(164): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-08 16:53:37,646 DEBUG [Listener at localhost.localdomain/38529] zookeeper.ZKUtil(164): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 16:53:37,650 DEBUG [Listener at localhost.localdomain/38529] zookeeper.ZKUtil(164): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-08 16:53:37,658 DEBUG [Listener at localhost.localdomain/38529] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37063 2023-06-08 16:53:37,658 DEBUG [Listener at localhost.localdomain/38529] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37063 2023-06-08 16:53:37,658 DEBUG [Listener at localhost.localdomain/38529] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37063 2023-06-08 16:53:37,659 DEBUG [Listener at localhost.localdomain/38529] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37063 2023-06-08 16:53:37,659 DEBUG [Listener at localhost.localdomain/38529] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37063 2023-06-08 16:53:37,664 INFO [Listener at localhost.localdomain/38529] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb, hbase.cluster.distributed=false 2023-06-08 16:53:37,722 INFO [Listener at localhost.localdomain/38529] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-06-08 16:53:37,722 INFO [Listener at localhost.localdomain/38529] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 16:53:37,722 INFO [Listener at localhost.localdomain/38529] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-08 16:53:37,722 INFO [Listener at localhost.localdomain/38529] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-08 16:53:37,723 INFO [Listener at localhost.localdomain/38529] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-06-08 16:53:37,723 INFO [Listener at localhost.localdomain/38529] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-08 16:53:37,727 INFO [Listener at localhost.localdomain/38529] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-08 16:53:37,729 INFO [Listener at localhost.localdomain/38529] ipc.NettyRpcServer(120): Bind to /148.251.75.209:45311 2023-06-08 16:53:37,731 INFO [Listener at localhost.localdomain/38529] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-08 16:53:37,736 DEBUG [Listener at localhost.localdomain/38529] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-08 16:53:37,737 INFO [Listener at localhost.localdomain/38529] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:53:37,739 INFO [Listener at localhost.localdomain/38529] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:53:37,740 INFO [Listener at localhost.localdomain/38529] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45311 connecting to ZooKeeper ensemble=127.0.0.1:62547 2023-06-08 16:53:37,744 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): regionserver:453110x0, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-08 16:53:37,745 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45311-0x101cba3a2ae0001 connected 2023-06-08 16:53:37,745 DEBUG [Listener at localhost.localdomain/38529] zookeeper.ZKUtil(164): regionserver:45311-0x101cba3a2ae0001, quorum=127.0.0.1:62547, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-08 16:53:37,747 DEBUG [Listener at localhost.localdomain/38529] zookeeper.ZKUtil(164): regionserver:45311-0x101cba3a2ae0001, quorum=127.0.0.1:62547, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 16:53:37,748 DEBUG [Listener at localhost.localdomain/38529] zookeeper.ZKUtil(164): regionserver:45311-0x101cba3a2ae0001, quorum=127.0.0.1:62547, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-08 16:53:37,749 DEBUG [Listener at localhost.localdomain/38529] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45311 2023-06-08 16:53:37,749 DEBUG [Listener at localhost.localdomain/38529] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45311 2023-06-08 16:53:37,750 DEBUG [Listener at localhost.localdomain/38529] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45311 2023-06-08 16:53:37,750 DEBUG [Listener at localhost.localdomain/38529] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45311 2023-06-08 16:53:37,750 DEBUG [Listener at localhost.localdomain/38529] ipc.RpcExecutor(311): Started handlerCount=1 
with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45311 2023-06-08 16:53:37,752 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,37063,1686243216769 2023-06-08 16:53:37,760 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-08 16:53:37,762 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,37063,1686243216769 2023-06-08 16:53:37,781 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-08 16:53:37,781 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): regionserver:45311-0x101cba3a2ae0001, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-08 16:53:37,781 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:53:37,782 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-08 16:53:37,783 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,37063,1686243216769 from backup master directory 2023-06-08 16:53:37,783 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-08 16:53:37,786 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,37063,1686243216769 2023-06-08 16:53:37,786 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-08 16:53:37,787 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-06-08 16:53:37,787 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,37063,1686243216769 2023-06-08 16:53:37,789 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-06-08 16:53:37,790 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-06-08 16:53:37,875 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/hbase.id with ID: d0762157-244c-46f7-b085-45c80fa462b7 2023-06-08 16:53:37,921 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:53:37,936 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:53:37,976 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x45ad165b to 127.0.0.1:62547 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-08 16:53:38,006 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4edeb488, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-08 16:53:38,025 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-08 16:53:38,027 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-06-08 16:53:38,034 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 16:53:38,061 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/MasterData/data/master/store-tmp 2023-06-08 16:53:38,088 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): 
Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:53:38,088 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-08 16:53:38,089 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:53:38,089 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:53:38,089 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-08 16:53:38,089 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:53:38,089 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:53:38,089 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-08 16:53:38,091 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/MasterData/WALs/jenkins-hbase20.apache.org,37063,1686243216769 2023-06-08 16:53:38,111 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C37063%2C1686243216769, suffix=, logDir=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/MasterData/WALs/jenkins-hbase20.apache.org,37063,1686243216769, archiveDir=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/MasterData/oldWALs, maxLogs=10 2023-06-08 16:53:38,128 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.CommonFSUtils$DfsBuilderUtility(753): Could not find replicate method on builder; will not set replicate when creating output stream java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DistributedFileSystem$HdfsDataOutputStreamBuilder.replicate() at java.lang.Class.getMethod(Class.java:1786) at org.apache.hadoop.hbase.util.CommonFSUtils$DfsBuilderUtility.(CommonFSUtils.java:750) at org.apache.hadoop.hbase.util.CommonFSUtils.createForWal(CommonFSUtils.java:802) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.initOutput(ProtobufLogWriter.java:102) at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:160) at org.apache.hadoop.hbase.wal.FSHLogProvider.createWriter(FSHLogProvider.java:78) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:307) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:881) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:574) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.init(AbstractFSWAL.java:515) at 
org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:160) at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:62) at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:295) at org.apache.hadoop.hbase.master.region.MasterRegion.createWAL(MasterRegion.java:200) at org.apache.hadoop.hbase.master.region.MasterRegion.bootstrap(MasterRegion.java:220) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:348) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:53:38,152 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/MasterData/WALs/jenkins-hbase20.apache.org,37063,1686243216769/jenkins-hbase20.apache.org%2C37063%2C1686243216769.1686243218126 2023-06-08 16:53:38,152 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33759,DS-98f3a072-5aa3-42b1-b053-c48411653aba,DISK], DatanodeInfoWithStorage[127.0.0.1:45383,DS-512bd96b-4315-4608-8c40-c52451a39796,DISK]] 2023-06-08 16:53:38,153 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-08 16:53:38,153 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:53:38,156 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:53:38,158 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:53:38,207 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:53:38,216 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-08 16:53:38,237 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered 
window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-08 16:53:38,252 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:53:38,258 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:53:38,260 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:53:38,274 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:53:38,279 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 16:53:38,280 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=700017, jitterRate=-0.10988301038742065}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-08 16:53:38,280 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-08 16:53:38,282 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-08 16:53:38,303 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-08 16:53:38,303 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-06-08 16:53:38,306 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
2023-06-08 16:53:38,308 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-06-08 16:53:38,338 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 29 msec 2023-06-08 16:53:38,338 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-08 16:53:38,360 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-08 16:53:38,365 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-06-08 16:53:38,388 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-08 16:53:38,391 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-06-08 16:53:38,393 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-08 16:53:38,397 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-08 16:53:38,401 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-08 16:53:38,403 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:53:38,405 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-08 16:53:38,405 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-08 16:53:38,416 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-08 16:53:38,419 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-08 16:53:38,419 DEBUG 
[Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): regionserver:45311-0x101cba3a2ae0001, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-08 16:53:38,419 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:53:38,420 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,37063,1686243216769, sessionid=0x101cba3a2ae0000, setting cluster-up flag (Was=false) 2023-06-08 16:53:38,432 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:53:38,436 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-08 16:53:38,437 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,37063,1686243216769 2023-06-08 16:53:38,441 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:53:38,444 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-08 16:53:38,445 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,37063,1686243216769 2023-06-08 16:53:38,447 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/.hbase-snapshot/.tmp 2023-06-08 16:53:38,453 INFO [RS:0;jenkins-hbase20:45311] regionserver.HRegionServer(951): ClusterId : d0762157-244c-46f7-b085-45c80fa462b7 2023-06-08 16:53:38,458 DEBUG [RS:0;jenkins-hbase20:45311] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-08 16:53:38,462 DEBUG [RS:0;jenkins-hbase20:45311] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-08 16:53:38,462 DEBUG [RS:0;jenkins-hbase20:45311] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-08 16:53:38,464 DEBUG [RS:0;jenkins-hbase20:45311] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-08 16:53:38,465 DEBUG [RS:0;jenkins-hbase20:45311] zookeeper.ReadOnlyZKClient(139): Connect 0x7124323b to 127.0.0.1:62547 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-08 16:53:38,468 DEBUG [RS:0;jenkins-hbase20:45311] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7fe3d26f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, 
readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-08 16:53:38,469 DEBUG [RS:0;jenkins-hbase20:45311] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7bdc594b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-08 16:53:38,490 DEBUG [RS:0;jenkins-hbase20:45311] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:45311 2023-06-08 16:53:38,493 INFO [RS:0;jenkins-hbase20:45311] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-08 16:53:38,493 INFO [RS:0;jenkins-hbase20:45311] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-08 16:53:38,493 DEBUG [RS:0;jenkins-hbase20:45311] regionserver.HRegionServer(1022): About to register with Master. 2023-06-08 16:53:38,496 INFO [RS:0;jenkins-hbase20:45311] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,37063,1686243216769 with isa=jenkins-hbase20.apache.org/148.251.75.209:45311, startcode=1686243217721 2023-06-08 16:53:38,511 DEBUG [RS:0;jenkins-hbase20:45311] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-08 16:53:38,549 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-08 16:53:38,559 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-08 16:53:38,559 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-08 16:53:38,559 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-08 16:53:38,560 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-08 16:53:38,560 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-06-08 16:53:38,560 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:53:38,560 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-08 16:53:38,560 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:53:38,562 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; 
org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1686243248562 2023-06-08 16:53:38,563 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-08 16:53:38,570 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-08 16:53:38,571 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-08 16:53:38,575 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-08 16:53:38,577 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-08 16:53:38,582 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-08 16:53:38,582 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-08 16:53:38,583 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-08 16:53:38,583 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-08 16:53:38,584 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-06-08 16:53:38,587 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-08 16:53:38,589 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-08 16:53:38,589 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-08 16:53:38,594 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-08 16:53:38,595 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-08 16:53:38,596 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686243218596,5,FailOnTimeoutGroup] 2023-06-08 16:53:38,597 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686243218597,5,FailOnTimeoutGroup] 2023-06-08 16:53:38,597 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-08 16:53:38,597 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-08 16:53:38,599 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-08 16:53:38,599 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-06-08 16:53:38,624 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-08 16:53:38,630 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-08 16:53:38,630 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb 2023-06-08 16:53:38,632 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:55275, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-06-08 16:53:38,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37063] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,45311,1686243217721 2023-06-08 16:53:38,653 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:53:38,658 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-08 16:53:38,661 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/meta/1588230740/info 2023-06-08 16:53:38,662 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-08 16:53:38,663 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:53:38,664 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-08 16:53:38,667 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/meta/1588230740/rep_barrier 2023-06-08 16:53:38,668 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-08 16:53:38,669 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:53:38,669 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-08 16:53:38,672 DEBUG [RS:0;jenkins-hbase20:45311] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb 2023-06-08 16:53:38,672 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/meta/1588230740/table 2023-06-08 16:53:38,672 DEBUG [RS:0;jenkins-hbase20:45311] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:33111 2023-06-08 16:53:38,672 DEBUG [RS:0;jenkins-hbase20:45311] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-08 16:53:38,673 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-08 16:53:38,674 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:53:38,676 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/meta/1588230740 2023-06-08 16:53:38,678 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/meta/1588230740 2023-06-08 16:53:38,679 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 16:53:38,680 DEBUG [RS:0;jenkins-hbase20:45311] zookeeper.ZKUtil(162): regionserver:45311-0x101cba3a2ae0001, quorum=127.0.0.1:62547, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,45311,1686243217721 2023-06-08 16:53:38,680 WARN [RS:0;jenkins-hbase20:45311] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-06-08 16:53:38,680 INFO [RS:0;jenkins-hbase20:45311] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 16:53:38,681 DEBUG [RS:0;jenkins-hbase20:45311] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/WALs/jenkins-hbase20.apache.org,45311,1686243217721 2023-06-08 16:53:38,683 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,45311,1686243217721] 2023-06-08 16:53:38,684 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-06-08 16:53:38,686 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-08 16:53:38,690 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 16:53:38,691 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=873570, jitterRate=0.11080203950405121}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-08 16:53:38,691 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-08 16:53:38,691 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-08 16:53:38,691 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-08 16:53:38,691 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-08 16:53:38,691 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-08 16:53:38,691 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-08 16:53:38,692 DEBUG [RS:0;jenkins-hbase20:45311] zookeeper.ZKUtil(162): regionserver:45311-0x101cba3a2ae0001, quorum=127.0.0.1:62547, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,45311,1686243217721 2023-06-08 16:53:38,692 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-08 16:53:38,692 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-08 16:53:38,699 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-08 16:53:38,699 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-08 16:53:38,706 DEBUG [RS:0;jenkins-hbase20:45311] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-08 16:53:38,709 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-08 16:53:38,717 INFO [RS:0;jenkins-hbase20:45311] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-08 16:53:38,723 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-08 16:53:38,725 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-08 16:53:38,739 INFO [RS:0;jenkins-hbase20:45311] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-08 16:53:38,742 INFO [RS:0;jenkins-hbase20:45311] 
throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-08 16:53:38,743 INFO [RS:0;jenkins-hbase20:45311] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-08 16:53:38,743 INFO [RS:0;jenkins-hbase20:45311] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-08 16:53:38,750 INFO [RS:0;jenkins-hbase20:45311] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-06-08 16:53:38,750 DEBUG [RS:0;jenkins-hbase20:45311] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:53:38,750 DEBUG [RS:0;jenkins-hbase20:45311] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:53:38,750 DEBUG [RS:0;jenkins-hbase20:45311] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:53:38,751 DEBUG [RS:0;jenkins-hbase20:45311] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:53:38,751 DEBUG [RS:0;jenkins-hbase20:45311] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:53:38,751 DEBUG [RS:0;jenkins-hbase20:45311] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-08 16:53:38,751 DEBUG [RS:0;jenkins-hbase20:45311] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:53:38,751 DEBUG [RS:0;jenkins-hbase20:45311] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:53:38,751 DEBUG [RS:0;jenkins-hbase20:45311] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:53:38,751 DEBUG [RS:0;jenkins-hbase20:45311] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:53:38,752 INFO [RS:0;jenkins-hbase20:45311] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-08 16:53:38,752 INFO [RS:0;jenkins-hbase20:45311] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-08 16:53:38,752 INFO [RS:0;jenkins-hbase20:45311] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-06-08 16:53:38,772 INFO [RS:0;jenkins-hbase20:45311] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-08 16:53:38,774 INFO [RS:0;jenkins-hbase20:45311] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,45311,1686243217721-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-08 16:53:38,786 INFO [RS:0;jenkins-hbase20:45311] regionserver.Replication(203): jenkins-hbase20.apache.org,45311,1686243217721 started 2023-06-08 16:53:38,786 INFO [RS:0;jenkins-hbase20:45311] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,45311,1686243217721, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:45311, sessionid=0x101cba3a2ae0001 2023-06-08 16:53:38,787 DEBUG [RS:0;jenkins-hbase20:45311] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-08 16:53:38,787 DEBUG [RS:0;jenkins-hbase20:45311] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,45311,1686243217721 2023-06-08 16:53:38,787 DEBUG [RS:0;jenkins-hbase20:45311] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,45311,1686243217721' 2023-06-08 16:53:38,787 DEBUG [RS:0;jenkins-hbase20:45311] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-08 16:53:38,788 DEBUG [RS:0;jenkins-hbase20:45311] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-08 16:53:38,788 DEBUG [RS:0;jenkins-hbase20:45311] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-08 16:53:38,788 DEBUG [RS:0;jenkins-hbase20:45311] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-08 16:53:38,788 DEBUG [RS:0;jenkins-hbase20:45311] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,45311,1686243217721 2023-06-08 16:53:38,788 DEBUG [RS:0;jenkins-hbase20:45311] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,45311,1686243217721' 2023-06-08 16:53:38,788 DEBUG [RS:0;jenkins-hbase20:45311] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-06-08 16:53:38,789 DEBUG [RS:0;jenkins-hbase20:45311] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-08 16:53:38,789 DEBUG [RS:0;jenkins-hbase20:45311] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-08 16:53:38,789 INFO [RS:0;jenkins-hbase20:45311] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-08 16:53:38,789 INFO [RS:0;jenkins-hbase20:45311] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-06-08 16:53:38,880 DEBUG [jenkins-hbase20:37063] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-08 16:53:38,885 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,45311,1686243217721, state=OPENING 2023-06-08 16:53:38,895 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-08 16:53:38,896 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:53:38,897 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-08 16:53:38,900 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,45311,1686243217721}] 2023-06-08 16:53:38,901 INFO [RS:0;jenkins-hbase20:45311] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C45311%2C1686243217721, suffix=, logDir=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/WALs/jenkins-hbase20.apache.org,45311,1686243217721, archiveDir=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/oldWALs, maxLogs=32 2023-06-08 16:53:38,920 INFO [RS:0;jenkins-hbase20:45311] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/WALs/jenkins-hbase20.apache.org,45311,1686243217721/jenkins-hbase20.apache.org%2C45311%2C1686243217721.1686243218904 2023-06-08 16:53:38,920 DEBUG [RS:0;jenkins-hbase20:45311] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33759,DS-98f3a072-5aa3-42b1-b053-c48411653aba,DISK], DatanodeInfoWithStorage[127.0.0.1:45383,DS-512bd96b-4315-4608-8c40-c52451a39796,DISK]] 2023-06-08 16:53:39,092 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,45311,1686243217721 2023-06-08 16:53:39,094 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-08 16:53:39,098 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:46008, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-08 16:53:39,109 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-08 16:53:39,110 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 16:53:39,113 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C45311%2C1686243217721.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/WALs/jenkins-hbase20.apache.org,45311,1686243217721, archiveDir=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/oldWALs, maxLogs=32 2023-06-08 16:53:39,127 
INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/WALs/jenkins-hbase20.apache.org,45311,1686243217721/jenkins-hbase20.apache.org%2C45311%2C1686243217721.meta.1686243219115.meta 2023-06-08 16:53:39,127 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45383,DS-512bd96b-4315-4608-8c40-c52451a39796,DISK], DatanodeInfoWithStorage[127.0.0.1:33759,DS-98f3a072-5aa3-42b1-b053-c48411653aba,DISK]] 2023-06-08 16:53:39,127 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-08 16:53:39,129 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-08 16:53:39,144 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-08 16:53:39,149 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-06-08 16:53:39,154 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-08 16:53:39,154 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:53:39,154 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-08 16:53:39,154 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-08 16:53:39,157 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-08 16:53:39,158 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/meta/1588230740/info 2023-06-08 16:53:39,159 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/meta/1588230740/info 2023-06-08 16:53:39,159 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-08 16:53:39,160 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:53:39,160 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-08 16:53:39,161 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/meta/1588230740/rep_barrier 2023-06-08 16:53:39,161 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/meta/1588230740/rep_barrier 2023-06-08 16:53:39,162 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-08 16:53:39,163 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:53:39,163 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-08 16:53:39,164 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/meta/1588230740/table 2023-06-08 16:53:39,164 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/meta/1588230740/table 2023-06-08 16:53:39,165 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-08 16:53:39,166 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:53:39,168 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/meta/1588230740 2023-06-08 16:53:39,171 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/meta/1588230740 2023-06-08 16:53:39,175 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-08 16:53:39,178 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-08 16:53:39,179 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=865103, jitterRate=0.10003572702407837}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-08 16:53:39,180 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-08 16:53:39,190 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1686243219086 2023-06-08 16:53:39,206 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-08 16:53:39,207 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-08 16:53:39,207 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,45311,1686243217721, state=OPEN 2023-06-08 16:53:39,209 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-08 16:53:39,209 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-08 16:53:39,216 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-08 16:53:39,216 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,45311,1686243217721 in 309 msec 2023-06-08 
16:53:39,222 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-08 16:53:39,222 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 509 msec 2023-06-08 16:53:39,228 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 733 msec 2023-06-08 16:53:39,228 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1686243219228, completionTime=-1 2023-06-08 16:53:39,229 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-08 16:53:39,229 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-06-08 16:53:39,284 DEBUG [hconnection-0x587b18b8-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-08 16:53:39,287 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:46010, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-08 16:53:39,302 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-08 16:53:39,302 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1686243279302 2023-06-08 16:53:39,302 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1686243339302 2023-06-08 16:53:39,302 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 73 msec 2023-06-08 16:53:39,326 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,37063,1686243216769-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-08 16:53:39,326 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,37063,1686243216769-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-08 16:53:39,326 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,37063,1686243216769-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-08 16:53:39,327 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:37063, period=300000, unit=MILLISECONDS is enabled. 2023-06-08 16:53:39,328 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 
2023-06-08 16:53:39,333 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-08 16:53:39,342 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-06-08 16:53:39,343 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-08 16:53:39,351 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-08 16:53:39,354 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-08 16:53:39,357 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-08 16:53:39,381 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/.tmp/data/hbase/namespace/2d6f4b9a6e44b307f851fb972dc8975a 2023-06-08 16:53:39,383 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/.tmp/data/hbase/namespace/2d6f4b9a6e44b307f851fb972dc8975a empty. 
2023-06-08 16:53:39,384 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/.tmp/data/hbase/namespace/2d6f4b9a6e44b307f851fb972dc8975a 2023-06-08 16:53:39,384 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-08 16:53:39,437 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-08 16:53:39,439 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2d6f4b9a6e44b307f851fb972dc8975a, NAME => 'hbase:namespace,,1686243219342.2d6f4b9a6e44b307f851fb972dc8975a.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/.tmp 2023-06-08 16:53:39,456 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686243219342.2d6f4b9a6e44b307f851fb972dc8975a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:53:39,456 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 2d6f4b9a6e44b307f851fb972dc8975a, disabling compactions & flushes 2023-06-08 16:53:39,456 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686243219342.2d6f4b9a6e44b307f851fb972dc8975a. 2023-06-08 16:53:39,456 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686243219342.2d6f4b9a6e44b307f851fb972dc8975a. 2023-06-08 16:53:39,456 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686243219342.2d6f4b9a6e44b307f851fb972dc8975a. after waiting 0 ms 2023-06-08 16:53:39,456 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686243219342.2d6f4b9a6e44b307f851fb972dc8975a. 2023-06-08 16:53:39,456 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686243219342.2d6f4b9a6e44b307f851fb972dc8975a. 
2023-06-08 16:53:39,456 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 2d6f4b9a6e44b307f851fb972dc8975a: 2023-06-08 16:53:39,461 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-08 16:53:39,478 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1686243219342.2d6f4b9a6e44b307f851fb972dc8975a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686243219464"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686243219464"}]},"ts":"1686243219464"} 2023-06-08 16:53:39,500 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-08 16:53:39,502 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-08 16:53:39,506 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686243219502"}]},"ts":"1686243219502"} 2023-06-08 16:53:39,510 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-08 16:53:39,516 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=2d6f4b9a6e44b307f851fb972dc8975a, ASSIGN}] 2023-06-08 16:53:39,520 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=2d6f4b9a6e44b307f851fb972dc8975a, ASSIGN 2023-06-08 16:53:39,522 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=2d6f4b9a6e44b307f851fb972dc8975a, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,45311,1686243217721; forceNewPlan=false, retain=false 2023-06-08 16:53:39,674 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=2d6f4b9a6e44b307f851fb972dc8975a, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,45311,1686243217721 2023-06-08 16:53:39,675 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1686243219342.2d6f4b9a6e44b307f851fb972dc8975a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686243219673"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686243219673"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686243219673"}]},"ts":"1686243219673"} 2023-06-08 16:53:39,684 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 2d6f4b9a6e44b307f851fb972dc8975a, server=jenkins-hbase20.apache.org,45311,1686243217721}] 2023-06-08 16:53:39,853 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1686243219342.2d6f4b9a6e44b307f851fb972dc8975a. 
2023-06-08 16:53:39,855 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2d6f4b9a6e44b307f851fb972dc8975a, NAME => 'hbase:namespace,,1686243219342.2d6f4b9a6e44b307f851fb972dc8975a.', STARTKEY => '', ENDKEY => ''} 2023-06-08 16:53:39,856 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 2d6f4b9a6e44b307f851fb972dc8975a 2023-06-08 16:53:39,856 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686243219342.2d6f4b9a6e44b307f851fb972dc8975a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:53:39,856 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 2d6f4b9a6e44b307f851fb972dc8975a 2023-06-08 16:53:39,857 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 2d6f4b9a6e44b307f851fb972dc8975a 2023-06-08 16:53:39,858 INFO [StoreOpener-2d6f4b9a6e44b307f851fb972dc8975a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 2d6f4b9a6e44b307f851fb972dc8975a 2023-06-08 16:53:39,860 DEBUG [StoreOpener-2d6f4b9a6e44b307f851fb972dc8975a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/namespace/2d6f4b9a6e44b307f851fb972dc8975a/info 2023-06-08 16:53:39,860 DEBUG [StoreOpener-2d6f4b9a6e44b307f851fb972dc8975a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/namespace/2d6f4b9a6e44b307f851fb972dc8975a/info 2023-06-08 16:53:39,861 INFO [StoreOpener-2d6f4b9a6e44b307f851fb972dc8975a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2d6f4b9a6e44b307f851fb972dc8975a columnFamilyName info 2023-06-08 16:53:39,862 INFO [StoreOpener-2d6f4b9a6e44b307f851fb972dc8975a-1] regionserver.HStore(310): Store=2d6f4b9a6e44b307f851fb972dc8975a/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:53:39,863 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/namespace/2d6f4b9a6e44b307f851fb972dc8975a 2023-06-08 16:53:39,864 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/namespace/2d6f4b9a6e44b307f851fb972dc8975a 2023-06-08 16:53:39,869 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 2d6f4b9a6e44b307f851fb972dc8975a 2023-06-08 16:53:39,872 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/namespace/2d6f4b9a6e44b307f851fb972dc8975a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 16:53:39,873 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 2d6f4b9a6e44b307f851fb972dc8975a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=712670, jitterRate=-0.09379348158836365}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-08 16:53:39,873 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 2d6f4b9a6e44b307f851fb972dc8975a: 2023-06-08 16:53:39,876 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1686243219342.2d6f4b9a6e44b307f851fb972dc8975a., pid=6, masterSystemTime=1686243219839 2023-06-08 16:53:39,881 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1686243219342.2d6f4b9a6e44b307f851fb972dc8975a. 2023-06-08 16:53:39,881 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1686243219342.2d6f4b9a6e44b307f851fb972dc8975a. 
2023-06-08 16:53:39,882 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=2d6f4b9a6e44b307f851fb972dc8975a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,45311,1686243217721 2023-06-08 16:53:39,883 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1686243219342.2d6f4b9a6e44b307f851fb972dc8975a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686243219881"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686243219881"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686243219881"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686243219881"}]},"ts":"1686243219881"} 2023-06-08 16:53:39,890 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-08 16:53:39,890 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 2d6f4b9a6e44b307f851fb972dc8975a, server=jenkins-hbase20.apache.org,45311,1686243217721 in 203 msec 2023-06-08 16:53:39,893 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-08 16:53:39,894 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=2d6f4b9a6e44b307f851fb972dc8975a, ASSIGN in 374 msec 2023-06-08 16:53:39,895 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-08 16:53:39,896 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686243219896"}]},"ts":"1686243219896"} 2023-06-08 16:53:39,900 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-08 16:53:39,904 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-08 16:53:39,907 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 560 msec 2023-06-08 16:53:39,954 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-08 16:53:39,956 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-08 16:53:39,956 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:53:39,996 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-08 16:53:40,015 DEBUG [Listener at localhost.localdomain/38529-EventThread] 
zookeeper.ZKWatcher(600): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-08 16:53:40,020 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 30 msec 2023-06-08 16:53:40,030 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-08 16:53:40,043 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-08 16:53:40,048 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 17 msec 2023-06-08 16:53:40,057 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-08 16:53:40,059 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-08 16:53:40,059 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 2.272sec 2023-06-08 16:53:40,062 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-08 16:53:40,064 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-06-08 16:53:40,064 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-08 16:53:40,066 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,37063,1686243216769-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-08 16:53:40,066 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,37063,1686243216769-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-06-08 16:53:40,077 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-08 16:53:40,164 DEBUG [Listener at localhost.localdomain/38529] zookeeper.ReadOnlyZKClient(139): Connect 0x267be37b to 127.0.0.1:62547 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-08 16:53:40,168 DEBUG [Listener at localhost.localdomain/38529] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@d42e23c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-08 16:53:40,181 DEBUG [hconnection-0x14e03c5a-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-08 16:53:40,196 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:46020, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-08 16:53:40,205 INFO [Listener at localhost.localdomain/38529] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,37063,1686243216769 2023-06-08 16:53:40,205 INFO [Listener at localhost.localdomain/38529] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:53:40,214 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-08 16:53:40,214 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:53:40,216 INFO [Listener at localhost.localdomain/38529] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-08 16:53:40,226 DEBUG [Listener at localhost.localdomain/38529] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-06-08 16:53:40,230 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:54188, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-06-08 16:53:40,239 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37063] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-06-08 16:53:40,239 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37063] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
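The two TableDescriptorChecker warnings just above indicate that the test runs with a deliberately tiny region size (786432 bytes) and memstore flush size (8192 bytes), presumably so that flushes and WAL rolls happen within seconds. A minimal sketch of how such values could be set on the test Configuration; this is an assumption about the test setup, using only the two property names that appear in the warnings:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class TinyRegionConf {
        public static Configuration create() {
            Configuration conf = HBaseConfiguration.create();
            // Values taken from the warnings above; both are far below production
            // defaults, which is why TableDescriptorChecker complains.
            conf.setLong("hbase.hregion.max.filesize", 786432L);      // ~768 KB split size
            conf.setLong("hbase.hregion.memstore.flush.size", 8192L); // 8 KB flush size
            return conf;
        }
    }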
2023-06-08 16:53:40,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37063] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-08 16:53:40,245 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37063] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling 2023-06-08 16:53:40,247 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-06-08 16:53:40,249 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-08 16:53:40,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37063] master.MasterRpcServices(697): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testSlowSyncLogRolling" procId is: 9 2023-06-08 16:53:40,254 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676 2023-06-08 16:53:40,255 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676 empty. 
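The create request at the top of this block is logged in HBase shell syntax ({NAME => 'info', BLOOMFILTER => 'ROW', VERSIONS => '1', ...}). A rough Java-client equivalent is sketched below for orientation only; the table and family names come from the log, and the remaining attributes are left at their HBase 2.x defaults, which match most of the values printed above:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateTestTable {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Admin admin = conn.getAdmin()) {
                // Single column family "info"; defaults cover BLOOMFILTER => 'ROW',
                // VERSIONS => '1', BLOCKSIZE => '65536', etc.
                admin.createTable(
                    TableDescriptorBuilder
                        .newBuilder(TableName.valueOf("TestLogRolling-testSlowSyncLogRolling"))
                        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("info"))
                        .build());
            }
        }
    }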
2023-06-08 16:53:40,256 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676 2023-06-08 16:53:40,256 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testSlowSyncLogRolling regions 2023-06-08 16:53:40,267 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37063] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-08 16:53:40,284 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/.tabledesc/.tableinfo.0000000001 2023-06-08 16:53:40,286 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => c95437b0a727e78f43bf7afc82f6d676, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1686243220239.c95437b0a727e78f43bf7afc82f6d676.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/.tmp 2023-06-08 16:53:40,304 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1686243220239.c95437b0a727e78f43bf7afc82f6d676.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:53:40,305 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1604): Closing c95437b0a727e78f43bf7afc82f6d676, disabling compactions & flushes 2023-06-08 16:53:40,305 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1686243220239.c95437b0a727e78f43bf7afc82f6d676. 2023-06-08 16:53:40,305 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1686243220239.c95437b0a727e78f43bf7afc82f6d676. 2023-06-08 16:53:40,305 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1686243220239.c95437b0a727e78f43bf7afc82f6d676. after waiting 0 ms 2023-06-08 16:53:40,305 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1686243220239.c95437b0a727e78f43bf7afc82f6d676. 2023-06-08 16:53:40,305 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1686243220239.c95437b0a727e78f43bf7afc82f6d676. 
2023-06-08 16:53:40,306 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for c95437b0a727e78f43bf7afc82f6d676: 2023-06-08 16:53:40,311 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ADD_TO_META 2023-06-08 16:53:40,313 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testSlowSyncLogRolling,,1686243220239.c95437b0a727e78f43bf7afc82f6d676.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1686243220313"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686243220313"}]},"ts":"1686243220313"} 2023-06-08 16:53:40,316 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-08 16:53:40,317 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-08 16:53:40,317 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686243220317"}]},"ts":"1686243220317"} 2023-06-08 16:53:40,319 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLING in hbase:meta 2023-06-08 16:53:40,322 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=c95437b0a727e78f43bf7afc82f6d676, ASSIGN}] 2023-06-08 16:53:40,325 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=c95437b0a727e78f43bf7afc82f6d676, ASSIGN 2023-06-08 16:53:40,327 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=c95437b0a727e78f43bf7afc82f6d676, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,45311,1686243217721; forceNewPlan=false, retain=false 2023-06-08 16:53:40,479 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=c95437b0a727e78f43bf7afc82f6d676, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,45311,1686243217721 2023-06-08 16:53:40,480 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testSlowSyncLogRolling,,1686243220239.c95437b0a727e78f43bf7afc82f6d676.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1686243220479"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686243220479"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686243220479"}]},"ts":"1686243220479"} 2023-06-08 16:53:40,487 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure c95437b0a727e78f43bf7afc82f6d676, server=jenkins-hbase20.apache.org,45311,1686243217721}] 2023-06-08 16:53:40,652 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testSlowSyncLogRolling,,1686243220239.c95437b0a727e78f43bf7afc82f6d676. 2023-06-08 16:53:40,652 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c95437b0a727e78f43bf7afc82f6d676, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1686243220239.c95437b0a727e78f43bf7afc82f6d676.', STARTKEY => '', ENDKEY => ''} 2023-06-08 16:53:40,653 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testSlowSyncLogRolling c95437b0a727e78f43bf7afc82f6d676 2023-06-08 16:53:40,653 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1686243220239.c95437b0a727e78f43bf7afc82f6d676.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:53:40,653 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for c95437b0a727e78f43bf7afc82f6d676 2023-06-08 16:53:40,654 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for c95437b0a727e78f43bf7afc82f6d676 2023-06-08 16:53:40,656 INFO [StoreOpener-c95437b0a727e78f43bf7afc82f6d676-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region c95437b0a727e78f43bf7afc82f6d676 2023-06-08 16:53:40,659 DEBUG [StoreOpener-c95437b0a727e78f43bf7afc82f6d676-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/info 2023-06-08 16:53:40,659 DEBUG [StoreOpener-c95437b0a727e78f43bf7afc82f6d676-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/info 2023-06-08 16:53:40,660 INFO [StoreOpener-c95437b0a727e78f43bf7afc82f6d676-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c95437b0a727e78f43bf7afc82f6d676 columnFamilyName info 2023-06-08 16:53:40,661 INFO [StoreOpener-c95437b0a727e78f43bf7afc82f6d676-1] regionserver.HStore(310): Store=c95437b0a727e78f43bf7afc82f6d676/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:53:40,665 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676 2023-06-08 16:53:40,666 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676 2023-06-08 16:53:40,671 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for c95437b0a727e78f43bf7afc82f6d676 2023-06-08 16:53:40,674 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 16:53:40,675 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened c95437b0a727e78f43bf7afc82f6d676; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=844525, jitterRate=0.07386977970600128}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-08 16:53:40,675 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for c95437b0a727e78f43bf7afc82f6d676: 2023-06-08 16:53:40,676 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testSlowSyncLogRolling,,1686243220239.c95437b0a727e78f43bf7afc82f6d676., pid=11, masterSystemTime=1686243220643 2023-06-08 16:53:40,679 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testSlowSyncLogRolling,,1686243220239.c95437b0a727e78f43bf7afc82f6d676. 2023-06-08 16:53:40,679 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testSlowSyncLogRolling,,1686243220239.c95437b0a727e78f43bf7afc82f6d676. 
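Once the region is open, the test starts writing to the table; the MemStoreFlusher entries further down show batches of 7 cells (~7.36 KB) being flushed each time the 8 KB flush size is exceeded. A hypothetical client-side sketch of writes that would produce that pattern follows; the row keys, qualifier and payload size are illustrative and not taken from the test source:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class WriteBatch {
        // Writes a batch of 7 ~1 KB values; with an 8 KB memstore flush size a
        // batch like this is enough to trigger the flushes seen in the log.
        static void writeBatch(Connection conn, int batch) throws Exception {
            byte[] family = Bytes.toBytes("info");
            byte[] qualifier = Bytes.toBytes("q");
            byte[] value = new byte[1024]; // illustrative payload size
            try (Table table = conn.getTable(
                    TableName.valueOf("TestLogRolling-testSlowSyncLogRolling"))) {
                for (int i = 0; i < 7; i++) {
                    Put put = new Put(Bytes.toBytes("row-" + batch + "-" + i));
                    put.addColumn(family, qualifier, value);
                    table.put(put);
                }
            }
        }
    }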
2023-06-08 16:53:40,680 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=c95437b0a727e78f43bf7afc82f6d676, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,45311,1686243217721 2023-06-08 16:53:40,680 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testSlowSyncLogRolling,,1686243220239.c95437b0a727e78f43bf7afc82f6d676.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1686243220680"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686243220680"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686243220680"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686243220680"}]},"ts":"1686243220680"} 2023-06-08 16:53:40,687 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-06-08 16:53:40,687 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure c95437b0a727e78f43bf7afc82f6d676, server=jenkins-hbase20.apache.org,45311,1686243217721 in 196 msec 2023-06-08 16:53:40,691 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-06-08 16:53:40,691 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=c95437b0a727e78f43bf7afc82f6d676, ASSIGN in 365 msec 2023-06-08 16:53:40,693 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-08 16:53:40,693 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686243220693"}]},"ts":"1686243220693"} 2023-06-08 16:53:40,696 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLED in hbase:meta 2023-06-08 16:53:40,699 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-06-08 16:53:40,702 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling in 456 msec 2023-06-08 16:53:44,670 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-06-08 16:53:44,736 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-06-08 16:53:44,738 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-06-08 16:53:44,738 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testSlowSyncLogRolling' 2023-06-08 16:53:46,900 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-06-08 16:53:46,902 INFO 
[HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-06-08 16:53:50,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37063] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-08 16:53:50,274 INFO [Listener at localhost.localdomain/38529] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testSlowSyncLogRolling, procId: 9 completed 2023-06-08 16:53:50,279 DEBUG [Listener at localhost.localdomain/38529] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testSlowSyncLogRolling 2023-06-08 16:53:50,280 DEBUG [Listener at localhost.localdomain/38529] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testSlowSyncLogRolling,,1686243220239.c95437b0a727e78f43bf7afc82f6d676. 2023-06-08 16:54:02,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45311] regionserver.HRegion(9158): Flush requested on c95437b0a727e78f43bf7afc82f6d676 2023-06-08 16:54:02,338 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c95437b0a727e78f43bf7afc82f6d676 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-08 16:54:02,404 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/.tmp/info/04e3b96a568943c88c12632d3d49b94b 2023-06-08 16:54:02,450 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/.tmp/info/04e3b96a568943c88c12632d3d49b94b as hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/info/04e3b96a568943c88c12632d3d49b94b 2023-06-08 16:54:02,460 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/info/04e3b96a568943c88c12632d3d49b94b, entries=7, sequenceid=11, filesize=12.1 K 2023-06-08 16:54:02,463 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for c95437b0a727e78f43bf7afc82f6d676 in 125ms, sequenceid=11, compaction requested=false 2023-06-08 16:54:02,465 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c95437b0a727e78f43bf7afc82f6d676: 2023-06-08 16:54:10,561 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:33759,DS-98f3a072-5aa3-42b1-b053-c48411653aba,DISK], DatanodeInfoWithStorage[127.0.0.1:45383,DS-512bd96b-4315-4608-8c40-c52451a39796,DISK]] 2023-06-08 16:54:12,770 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:33759,DS-98f3a072-5aa3-42b1-b053-c48411653aba,DISK], DatanodeInfoWithStorage[127.0.0.1:45383,DS-512bd96b-4315-4608-8c40-c52451a39796,DISK]] 2023-06-08 16:54:14,976 INFO [sync.4] 
wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:33759,DS-98f3a072-5aa3-42b1-b053-c48411653aba,DISK], DatanodeInfoWithStorage[127.0.0.1:45383,DS-512bd96b-4315-4608-8c40-c52451a39796,DISK]] 2023-06-08 16:54:17,182 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:33759,DS-98f3a072-5aa3-42b1-b053-c48411653aba,DISK], DatanodeInfoWithStorage[127.0.0.1:45383,DS-512bd96b-4315-4608-8c40-c52451a39796,DISK]] 2023-06-08 16:54:17,182 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45311] regionserver.HRegion(9158): Flush requested on c95437b0a727e78f43bf7afc82f6d676 2023-06-08 16:54:17,183 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c95437b0a727e78f43bf7afc82f6d676 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-08 16:54:17,387 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:33759,DS-98f3a072-5aa3-42b1-b053-c48411653aba,DISK], DatanodeInfoWithStorage[127.0.0.1:45383,DS-512bd96b-4315-4608-8c40-c52451a39796,DISK]] 2023-06-08 16:54:17,409 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=21 (bloomFilter=true), to=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/.tmp/info/834d016733de455fa6dd6865adeb2276 2023-06-08 16:54:17,419 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/.tmp/info/834d016733de455fa6dd6865adeb2276 as hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/info/834d016733de455fa6dd6865adeb2276 2023-06-08 16:54:17,429 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/info/834d016733de455fa6dd6865adeb2276, entries=7, sequenceid=21, filesize=12.1 K 2023-06-08 16:54:17,632 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:33759,DS-98f3a072-5aa3-42b1-b053-c48411653aba,DISK], DatanodeInfoWithStorage[127.0.0.1:45383,DS-512bd96b-4315-4608-8c40-c52451a39796,DISK]] 2023-06-08 16:54:17,633 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for c95437b0a727e78f43bf7afc82f6d676 in 450ms, sequenceid=21, compaction requested=false 2023-06-08 16:54:17,634 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c95437b0a727e78f43bf7afc82f6d676: 2023-06-08 16:54:17,634 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=24.2 K, sizeToCheck=16.0 K 2023-06-08 16:54:17,634 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-08 16:54:17,636 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split 
hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/info/04e3b96a568943c88c12632d3d49b94b because midkey is the same as first or last row 2023-06-08 16:54:19,389 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:33759,DS-98f3a072-5aa3-42b1-b053-c48411653aba,DISK], DatanodeInfoWithStorage[127.0.0.1:45383,DS-512bd96b-4315-4608-8c40-c52451a39796,DISK]] 2023-06-08 16:54:21,595 WARN [sync.4] wal.AbstractFSWAL(1302): Requesting log roll because we exceeded slow sync threshold; count=7, threshold=5, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:33759,DS-98f3a072-5aa3-42b1-b053-c48411653aba,DISK], DatanodeInfoWithStorage[127.0.0.1:45383,DS-512bd96b-4315-4608-8c40-c52451a39796,DISK]] 2023-06-08 16:54:21,599 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C45311%2C1686243217721:(num 1686243218904) roll requested 2023-06-08 16:54:21,599 INFO [sync.4] wal.AbstractFSWAL(1141): Slow sync cost: 207 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:33759,DS-98f3a072-5aa3-42b1-b053-c48411653aba,DISK], DatanodeInfoWithStorage[127.0.0.1:45383,DS-512bd96b-4315-4608-8c40-c52451a39796,DISK]] 2023-06-08 16:54:21,817 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:33759,DS-98f3a072-5aa3-42b1-b053-c48411653aba,DISK], DatanodeInfoWithStorage[127.0.0.1:45383,DS-512bd96b-4315-4608-8c40-c52451a39796,DISK]] 2023-06-08 16:54:21,819 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/WALs/jenkins-hbase20.apache.org,45311,1686243217721/jenkins-hbase20.apache.org%2C45311%2C1686243217721.1686243218904 with entries=24, filesize=20.43 KB; new WAL /user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/WALs/jenkins-hbase20.apache.org,45311,1686243217721/jenkins-hbase20.apache.org%2C45311%2C1686243217721.1686243261600 2023-06-08 16:54:21,821 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33759,DS-98f3a072-5aa3-42b1-b053-c48411653aba,DISK], DatanodeInfoWithStorage[127.0.0.1:45383,DS-512bd96b-4315-4608-8c40-c52451a39796,DISK]] 2023-06-08 16:54:21,821 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/WALs/jenkins-hbase20.apache.org,45311,1686243217721/jenkins-hbase20.apache.org%2C45311%2C1686243217721.1686243218904 is not closed yet, will try archiving it next time 2023-06-08 16:54:31,619 INFO [Listener at localhost.localdomain/38529] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-06-08 16:54:36,622 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 5001 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:33759,DS-98f3a072-5aa3-42b1-b053-c48411653aba,DISK], DatanodeInfoWithStorage[127.0.0.1:45383,DS-512bd96b-4315-4608-8c40-c52451a39796,DISK]] 2023-06-08 16:54:36,623 WARN [sync.0] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5001 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:33759,DS-98f3a072-5aa3-42b1-b053-c48411653aba,DISK], 
DatanodeInfoWithStorage[127.0.0.1:45383,DS-512bd96b-4315-4608-8c40-c52451a39796,DISK]] 2023-06-08 16:54:36,623 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45311] regionserver.HRegion(9158): Flush requested on c95437b0a727e78f43bf7afc82f6d676 2023-06-08 16:54:36,623 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C45311%2C1686243217721:(num 1686243261600) roll requested 2023-06-08 16:54:36,623 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c95437b0a727e78f43bf7afc82f6d676 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-08 16:54:38,624 INFO [Listener at localhost.localdomain/38529] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-06-08 16:54:41,625 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 5001 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:33759,DS-98f3a072-5aa3-42b1-b053-c48411653aba,DISK], DatanodeInfoWithStorage[127.0.0.1:45383,DS-512bd96b-4315-4608-8c40-c52451a39796,DISK]] 2023-06-08 16:54:41,626 WARN [sync.1] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5001 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:33759,DS-98f3a072-5aa3-42b1-b053-c48411653aba,DISK], DatanodeInfoWithStorage[127.0.0.1:45383,DS-512bd96b-4315-4608-8c40-c52451a39796,DISK]] 2023-06-08 16:54:41,641 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 5002 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:33759,DS-98f3a072-5aa3-42b1-b053-c48411653aba,DISK], DatanodeInfoWithStorage[127.0.0.1:45383,DS-512bd96b-4315-4608-8c40-c52451a39796,DISK]] 2023-06-08 16:54:41,642 WARN [sync.2] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5002 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:33759,DS-98f3a072-5aa3-42b1-b053-c48411653aba,DISK], DatanodeInfoWithStorage[127.0.0.1:45383,DS-512bd96b-4315-4608-8c40-c52451a39796,DISK]] 2023-06-08 16:54:41,644 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/WALs/jenkins-hbase20.apache.org,45311,1686243217721/jenkins-hbase20.apache.org%2C45311%2C1686243217721.1686243261600 with entries=6, filesize=6.07 KB; new WAL /user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/WALs/jenkins-hbase20.apache.org,45311,1686243217721/jenkins-hbase20.apache.org%2C45311%2C1686243217721.1686243276623 2023-06-08 16:54:41,644 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33759,DS-98f3a072-5aa3-42b1-b053-c48411653aba,DISK], DatanodeInfoWithStorage[127.0.0.1:45383,DS-512bd96b-4315-4608-8c40-c52451a39796,DISK]] 2023-06-08 16:54:41,644 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/WALs/jenkins-hbase20.apache.org,45311,1686243217721/jenkins-hbase20.apache.org%2C45311%2C1686243217721.1686243261600 is not closed yet, will try archiving it next time 2023-06-08 16:54:41,650 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=31 (bloomFilter=true), 
to=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/.tmp/info/7154b2c42e3845df86d08d5101709155 2023-06-08 16:54:41,660 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/.tmp/info/7154b2c42e3845df86d08d5101709155 as hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/info/7154b2c42e3845df86d08d5101709155 2023-06-08 16:54:41,668 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/info/7154b2c42e3845df86d08d5101709155, entries=7, sequenceid=31, filesize=12.1 K 2023-06-08 16:54:41,671 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for c95437b0a727e78f43bf7afc82f6d676 in 5048ms, sequenceid=31, compaction requested=true 2023-06-08 16:54:41,671 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c95437b0a727e78f43bf7afc82f6d676: 2023-06-08 16:54:41,671 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=36.3 K, sizeToCheck=16.0 K 2023-06-08 16:54:41,671 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-08 16:54:41,671 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/info/04e3b96a568943c88c12632d3d49b94b because midkey is the same as first or last row 2023-06-08 16:54:41,673 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-08 16:54:41,673 DEBUG [RS:0;jenkins-hbase20:45311-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-08 16:54:41,678 DEBUG [RS:0;jenkins-hbase20:45311-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 37197 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-08 16:54:41,680 DEBUG [RS:0;jenkins-hbase20:45311-shortCompactions-0] regionserver.HStore(1912): c95437b0a727e78f43bf7afc82f6d676/info is initiating minor compaction (all files) 2023-06-08 16:54:41,680 INFO [RS:0;jenkins-hbase20:45311-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of c95437b0a727e78f43bf7afc82f6d676/info in TestLogRolling-testSlowSyncLogRolling,,1686243220239.c95437b0a727e78f43bf7afc82f6d676. 
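The recurring "Should split because info size=..., sizeToCheck=16.0 K" followed by "cannot split ... because midkey is the same as first or last row" comes from the region split policy. The 16.0 K figure can be reproduced from the parameters printed when the region opened (initialSize=16384, desiredMaxFileSize=844525): with a single region of the table on this server, IncreasingToUpperBoundRegionSplitPolicy caps the threshold at roughly min(desiredMaxFileSize, initialSize x regionCount cubed). The sketch below is a simplified rendering of that size check as I understand it, not HBase's actual code:

    public final class SplitSizeCheck {
        // Simplified: grow the split threshold with the cube of the region count,
        // capped at the configured desired max file size.
        static long sizeToCheck(long initialSize, long desiredMaxFileSize,
                                int regionsWithCommonTable) {
            if (regionsWithCommonTable == 0) {
                return desiredMaxFileSize;
            }
            long grown = initialSize * regionsWithCommonTable * regionsWithCommonTable
                    * (long) regionsWithCommonTable;
            return Math.min(desiredMaxFileSize, grown);
        }

        public static void main(String[] args) {
            // Values from the log: initialSize=16384, desiredMaxFileSize=844525, 1 region.
            System.out.println(sizeToCheck(16384L, 844525L, 1)); // 16384, i.e. the 16.0 K sizeToCheck
        }
    }

The split is then refused anyway because the candidate store file's midkey equals its first or last row, as the log states.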
2023-06-08 16:54:41,680 INFO [RS:0;jenkins-hbase20:45311-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/info/04e3b96a568943c88c12632d3d49b94b, hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/info/834d016733de455fa6dd6865adeb2276, hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/info/7154b2c42e3845df86d08d5101709155] into tmpdir=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/.tmp, totalSize=36.3 K 2023-06-08 16:54:41,682 DEBUG [RS:0;jenkins-hbase20:45311-shortCompactions-0] compactions.Compactor(207): Compacting 04e3b96a568943c88c12632d3d49b94b, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, earliestPutTs=1686243230286 2023-06-08 16:54:41,683 DEBUG [RS:0;jenkins-hbase20:45311-shortCompactions-0] compactions.Compactor(207): Compacting 834d016733de455fa6dd6865adeb2276, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=21, earliestPutTs=1686243244340 2023-06-08 16:54:41,683 DEBUG [RS:0;jenkins-hbase20:45311-shortCompactions-0] compactions.Compactor(207): Compacting 7154b2c42e3845df86d08d5101709155, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=31, earliestPutTs=1686243259186 2023-06-08 16:54:41,714 INFO [RS:0;jenkins-hbase20:45311-shortCompactions-0] throttle.PressureAwareThroughputController(145): c95437b0a727e78f43bf7afc82f6d676#info#compaction#3 average throughput is 10.77 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-08 16:54:41,737 DEBUG [RS:0;jenkins-hbase20:45311-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/.tmp/info/7113e83f17404b0487c925d41201dadc as hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/info/7113e83f17404b0487c925d41201dadc 2023-06-08 16:54:41,753 INFO [RS:0;jenkins-hbase20:45311-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in c95437b0a727e78f43bf7afc82f6d676/info of c95437b0a727e78f43bf7afc82f6d676 into 7113e83f17404b0487c925d41201dadc(size=27.0 K), total size for store is 27.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
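The AbstractFSWAL "Slow sync cost" entries interleaved above show the two conditions that request a WAL roll in this test: a count-based trigger (count=7 slow syncs against threshold=5, at 16:54:21) and a time-based trigger (a single sync of 5001 ms against a 5000 ms threshold, at 16:54:36). A deliberately simplified illustration of that decision follows, using only the numbers printed in the log; it is not HBase's actual implementation:

    public final class SlowSyncRollCheck {
        // Simplified: roll when too many syncs were slow, or when one sync was very slow.
        static boolean shouldRequestRoll(int slowSyncCount, int countThreshold,
                                         long lastSyncMs, long timeThresholdMs) {
            return slowSyncCount > countThreshold || lastSyncMs >= timeThresholdMs;
        }

        public static void main(String[] args) {
            // 16:54:21 -- seven ~200 ms syncs exceeded the count threshold of 5.
            System.out.println(shouldRequestRoll(7, 5, 207, 5000));  // true
            // 16:54:36 -- a single 5001 ms sync exceeded the 5000 ms threshold.
            System.out.println(shouldRequestRoll(1, 5, 5001, 5000)); // true
        }
    }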
2023-06-08 16:54:41,753 DEBUG [RS:0;jenkins-hbase20:45311-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for c95437b0a727e78f43bf7afc82f6d676: 2023-06-08 16:54:41,753 INFO [RS:0;jenkins-hbase20:45311-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testSlowSyncLogRolling,,1686243220239.c95437b0a727e78f43bf7afc82f6d676., storeName=c95437b0a727e78f43bf7afc82f6d676/info, priority=13, startTime=1686243281673; duration=0sec 2023-06-08 16:54:41,754 DEBUG [RS:0;jenkins-hbase20:45311-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=27.0 K, sizeToCheck=16.0 K 2023-06-08 16:54:41,754 DEBUG [RS:0;jenkins-hbase20:45311-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-08 16:54:41,755 DEBUG [RS:0;jenkins-hbase20:45311-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/info/7113e83f17404b0487c925d41201dadc because midkey is the same as first or last row 2023-06-08 16:54:41,755 DEBUG [RS:0;jenkins-hbase20:45311-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-08 16:54:42,053 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/WALs/jenkins-hbase20.apache.org,45311,1686243217721/jenkins-hbase20.apache.org%2C45311%2C1686243217721.1686243261600 to hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/oldWALs/jenkins-hbase20.apache.org%2C45311%2C1686243217721.1686243261600 2023-06-08 16:54:53,762 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45311] regionserver.HRegion(9158): Flush requested on c95437b0a727e78f43bf7afc82f6d676 2023-06-08 16:54:53,762 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c95437b0a727e78f43bf7afc82f6d676 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-08 16:54:53,784 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=42 (bloomFilter=true), to=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/.tmp/info/fd314ee05f984021a1cae2c2961e3af7 2023-06-08 16:54:53,796 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/.tmp/info/fd314ee05f984021a1cae2c2961e3af7 as hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/info/fd314ee05f984021a1cae2c2961e3af7 2023-06-08 16:54:53,806 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/info/fd314ee05f984021a1cae2c2961e3af7, entries=7, sequenceid=42, filesize=12.1 K 2023-06-08 16:54:53,808 INFO 
[MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for c95437b0a727e78f43bf7afc82f6d676 in 45ms, sequenceid=42, compaction requested=false 2023-06-08 16:54:53,808 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c95437b0a727e78f43bf7afc82f6d676: 2023-06-08 16:54:53,808 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=39.1 K, sizeToCheck=16.0 K 2023-06-08 16:54:53,808 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-08 16:54:53,808 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/info/7113e83f17404b0487c925d41201dadc because midkey is the same as first or last row 2023-06-08 16:55:01,774 INFO [Listener at localhost.localdomain/38529] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-08 16:55:01,779 INFO [Listener at localhost.localdomain/38529] client.ConnectionImplementation(1980): Closing master protocol: MasterService 2023-06-08 16:55:01,779 DEBUG [Listener at localhost.localdomain/38529] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x267be37b to 127.0.0.1:62547 2023-06-08 16:55:01,780 DEBUG [Listener at localhost.localdomain/38529] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 16:55:01,782 DEBUG [Listener at localhost.localdomain/38529] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-06-08 16:55:01,783 DEBUG [Listener at localhost.localdomain/38529] util.JVMClusterUtil(257): Found active master hash=1857752430, stopped=false 2023-06-08 16:55:01,783 INFO [Listener at localhost.localdomain/38529] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,37063,1686243216769 2023-06-08 16:55:01,786 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): regionserver:45311-0x101cba3a2ae0001, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-08 16:55:01,786 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-08 16:55:01,786 INFO [Listener at localhost.localdomain/38529] procedure2.ProcedureExecutor(629): Stopping 2023-06-08 16:55:01,786 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:55:01,786 DEBUG [Listener at localhost.localdomain/38529] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x45ad165b to 127.0.0.1:62547 2023-06-08 16:55:01,787 DEBUG [Listener at localhost.localdomain/38529] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 16:55:01,787 INFO [Listener at localhost.localdomain/38529] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,45311,1686243217721' ***** 2023-06-08 16:55:01,787 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37063-0x101cba3a2ae0000, 
quorum=127.0.0.1:62547, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 16:55:01,787 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45311-0x101cba3a2ae0001, quorum=127.0.0.1:62547, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 16:55:01,787 INFO [Listener at localhost.localdomain/38529] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-08 16:55:01,787 INFO [RS:0;jenkins-hbase20:45311] regionserver.HeapMemoryManager(220): Stopping 2023-06-08 16:55:01,788 INFO [RS:0;jenkins-hbase20:45311] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-08 16:55:01,788 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-08 16:55:01,788 INFO [RS:0;jenkins-hbase20:45311] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-06-08 16:55:01,788 INFO [RS:0;jenkins-hbase20:45311] regionserver.HRegionServer(3303): Received CLOSE for 2d6f4b9a6e44b307f851fb972dc8975a 2023-06-08 16:55:01,789 INFO [RS:0;jenkins-hbase20:45311] regionserver.HRegionServer(3303): Received CLOSE for c95437b0a727e78f43bf7afc82f6d676 2023-06-08 16:55:01,789 INFO [RS:0;jenkins-hbase20:45311] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,45311,1686243217721 2023-06-08 16:55:01,789 DEBUG [RS:0;jenkins-hbase20:45311] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7124323b to 127.0.0.1:62547 2023-06-08 16:55:01,789 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 2d6f4b9a6e44b307f851fb972dc8975a, disabling compactions & flushes 2023-06-08 16:55:01,789 DEBUG [RS:0;jenkins-hbase20:45311] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 16:55:01,789 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686243219342.2d6f4b9a6e44b307f851fb972dc8975a. 2023-06-08 16:55:01,790 INFO [RS:0;jenkins-hbase20:45311] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-08 16:55:01,790 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686243219342.2d6f4b9a6e44b307f851fb972dc8975a. 2023-06-08 16:55:01,790 INFO [RS:0;jenkins-hbase20:45311] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-08 16:55:01,790 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686243219342.2d6f4b9a6e44b307f851fb972dc8975a. after waiting 0 ms 2023-06-08 16:55:01,790 INFO [RS:0;jenkins-hbase20:45311] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-06-08 16:55:01,790 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686243219342.2d6f4b9a6e44b307f851fb972dc8975a. 
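From 16:55:01 onward the test tears the mini cluster down: shutdown is requested of the master, the region server receives CLOSE for its regions, and each memstore is flushed one last time before the regions close. In test code this sequence is typically driven by a single call on the testing utility; below is a minimal sketch of that teardown pattern, assuming the conventional structure, which is not itself shown in the log:

    import org.apache.hadoop.hbase.HBaseTestingUtility;

    public class MiniClusterTeardown {
        private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

        public static void main(String[] args) throws Exception {
            TEST_UTIL.startMiniCluster();      // start DFS, ZooKeeper, master and region server
            try {
                // ... test body would run here ...
            } finally {
                // Triggers the "Shutting down minicluster" sequence seen in the log:
                // regions are closed (flushing any remaining memstore data) and the
                // master and region server are stopped.
                TEST_UTIL.shutdownMiniCluster();
            }
        }
    }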
2023-06-08 16:55:01,790 INFO [RS:0;jenkins-hbase20:45311] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-08 16:55:01,790 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 2d6f4b9a6e44b307f851fb972dc8975a 1/1 column families, dataSize=78 B heapSize=488 B 2023-06-08 16:55:01,790 INFO [RS:0;jenkins-hbase20:45311] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-06-08 16:55:01,790 DEBUG [RS:0;jenkins-hbase20:45311] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 2d6f4b9a6e44b307f851fb972dc8975a=hbase:namespace,,1686243219342.2d6f4b9a6e44b307f851fb972dc8975a., c95437b0a727e78f43bf7afc82f6d676=TestLogRolling-testSlowSyncLogRolling,,1686243220239.c95437b0a727e78f43bf7afc82f6d676.} 2023-06-08 16:55:01,791 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-08 16:55:01,791 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-08 16:55:01,791 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-08 16:55:01,791 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-08 16:55:01,791 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-08 16:55:01,791 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.87 KB heapSize=5.38 KB 2023-06-08 16:55:01,792 DEBUG [RS:0;jenkins-hbase20:45311] regionserver.HRegionServer(1504): Waiting on 1588230740, 2d6f4b9a6e44b307f851fb972dc8975a, c95437b0a727e78f43bf7afc82f6d676 2023-06-08 16:55:01,814 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.64 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/meta/1588230740/.tmp/info/0c348a20a1104de0bf29a2328c01b5dd 2023-06-08 16:55:01,815 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/namespace/2d6f4b9a6e44b307f851fb972dc8975a/.tmp/info/9d037458f53f472f982b6c2ea87fdb48 2023-06-08 16:55:01,826 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/namespace/2d6f4b9a6e44b307f851fb972dc8975a/.tmp/info/9d037458f53f472f982b6c2ea87fdb48 as hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/namespace/2d6f4b9a6e44b307f851fb972dc8975a/info/9d037458f53f472f982b6c2ea87fdb48 2023-06-08 16:55:01,837 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=232 B at sequenceid=14 (bloomFilter=false), 
to=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/meta/1588230740/.tmp/table/d82b02c26dc848bb9a10be78c76b3977 2023-06-08 16:55:01,839 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/namespace/2d6f4b9a6e44b307f851fb972dc8975a/info/9d037458f53f472f982b6c2ea87fdb48, entries=2, sequenceid=6, filesize=4.8 K 2023-06-08 16:55:01,840 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 2d6f4b9a6e44b307f851fb972dc8975a in 50ms, sequenceid=6, compaction requested=false 2023-06-08 16:55:01,848 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/meta/1588230740/.tmp/info/0c348a20a1104de0bf29a2328c01b5dd as hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/meta/1588230740/info/0c348a20a1104de0bf29a2328c01b5dd 2023-06-08 16:55:01,849 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/namespace/2d6f4b9a6e44b307f851fb972dc8975a/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-06-08 16:55:01,850 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686243219342.2d6f4b9a6e44b307f851fb972dc8975a. 2023-06-08 16:55:01,850 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 2d6f4b9a6e44b307f851fb972dc8975a: 2023-06-08 16:55:01,851 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1686243219342.2d6f4b9a6e44b307f851fb972dc8975a. 2023-06-08 16:55:01,851 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing c95437b0a727e78f43bf7afc82f6d676, disabling compactions & flushes 2023-06-08 16:55:01,851 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1686243220239.c95437b0a727e78f43bf7afc82f6d676. 2023-06-08 16:55:01,851 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1686243220239.c95437b0a727e78f43bf7afc82f6d676. 2023-06-08 16:55:01,851 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1686243220239.c95437b0a727e78f43bf7afc82f6d676. after waiting 0 ms 2023-06-08 16:55:01,851 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1686243220239.c95437b0a727e78f43bf7afc82f6d676. 
2023-06-08 16:55:01,851 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing c95437b0a727e78f43bf7afc82f6d676 1/1 column families, dataSize=3.15 KB heapSize=3.63 KB 2023-06-08 16:55:01,858 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/meta/1588230740/info/0c348a20a1104de0bf29a2328c01b5dd, entries=20, sequenceid=14, filesize=7.4 K 2023-06-08 16:55:01,862 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/meta/1588230740/.tmp/table/d82b02c26dc848bb9a10be78c76b3977 as hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/meta/1588230740/table/d82b02c26dc848bb9a10be78c76b3977 2023-06-08 16:55:01,864 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.15 KB at sequenceid=48 (bloomFilter=true), to=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/.tmp/info/5cedf4ea5b124e639c057dc6abf65702 2023-06-08 16:55:01,870 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/meta/1588230740/table/d82b02c26dc848bb9a10be78c76b3977, entries=4, sequenceid=14, filesize=4.8 K 2023-06-08 16:55:01,871 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.87 KB/2938, heapSize ~5.09 KB/5216, currentSize=0 B/0 for 1588230740 in 80ms, sequenceid=14, compaction requested=false 2023-06-08 16:55:01,874 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/.tmp/info/5cedf4ea5b124e639c057dc6abf65702 as hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/info/5cedf4ea5b124e639c057dc6abf65702 2023-06-08 16:55:01,881 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-06-08 16:55:01,882 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-06-08 16:55:01,883 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-08 16:55:01,883 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-08 16:55:01,883 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-06-08 16:55:01,885 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added 
hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/info/5cedf4ea5b124e639c057dc6abf65702, entries=3, sequenceid=48, filesize=7.9 K 2023-06-08 16:55:01,886 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.15 KB/3228, heapSize ~3.61 KB/3696, currentSize=0 B/0 for c95437b0a727e78f43bf7afc82f6d676 in 35ms, sequenceid=48, compaction requested=true 2023-06-08 16:55:01,889 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1686243220239.c95437b0a727e78f43bf7afc82f6d676.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/info/04e3b96a568943c88c12632d3d49b94b, hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/info/834d016733de455fa6dd6865adeb2276, hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/info/7154b2c42e3845df86d08d5101709155] to archive 2023-06-08 16:55:01,891 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1686243220239.c95437b0a727e78f43bf7afc82f6d676.-1] backup.HFileArchiver(360): Archiving compacted files. 2023-06-08 16:55:01,894 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1686243220239.c95437b0a727e78f43bf7afc82f6d676.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/info/04e3b96a568943c88c12632d3d49b94b to hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/archive/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/info/04e3b96a568943c88c12632d3d49b94b 2023-06-08 16:55:01,896 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1686243220239.c95437b0a727e78f43bf7afc82f6d676.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/info/834d016733de455fa6dd6865adeb2276 to hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/archive/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/info/834d016733de455fa6dd6865adeb2276 2023-06-08 16:55:01,898 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1686243220239.c95437b0a727e78f43bf7afc82f6d676.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/info/7154b2c42e3845df86d08d5101709155 to hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/archive/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/info/7154b2c42e3845df86d08d5101709155 2023-06-08 16:55:01,926 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/data/default/TestLogRolling-testSlowSyncLogRolling/c95437b0a727e78f43bf7afc82f6d676/recovered.edits/51.seqid, newMaxSeqId=51, maxSeqId=1 2023-06-08 16:55:01,928 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1686243220239.c95437b0a727e78f43bf7afc82f6d676. 2023-06-08 16:55:01,928 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for c95437b0a727e78f43bf7afc82f6d676: 2023-06-08 16:55:01,928 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testSlowSyncLogRolling,,1686243220239.c95437b0a727e78f43bf7afc82f6d676. 2023-06-08 16:55:01,992 INFO [RS:0;jenkins-hbase20:45311] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,45311,1686243217721; all regions closed. 2023-06-08 16:55:01,996 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/WALs/jenkins-hbase20.apache.org,45311,1686243217721 2023-06-08 16:55:02,010 DEBUG [RS:0;jenkins-hbase20:45311] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/oldWALs 2023-06-08 16:55:02,010 INFO [RS:0;jenkins-hbase20:45311] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C45311%2C1686243217721.meta:.meta(num 1686243219115) 2023-06-08 16:55:02,010 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/WALs/jenkins-hbase20.apache.org,45311,1686243217721 2023-06-08 16:55:02,021 DEBUG [RS:0;jenkins-hbase20:45311] wal.AbstractFSWAL(1028): Moved 2 WAL file(s) to /user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/oldWALs 2023-06-08 16:55:02,021 INFO [RS:0;jenkins-hbase20:45311] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C45311%2C1686243217721:(num 1686243276623) 2023-06-08 16:55:02,021 DEBUG [RS:0;jenkins-hbase20:45311] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 16:55:02,021 INFO [RS:0;jenkins-hbase20:45311] regionserver.LeaseManager(133): Closed leases 2023-06-08 16:55:02,022 INFO [RS:0;jenkins-hbase20:45311] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-06-08 16:55:02,022 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
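The entries above show the WAL writers being closed and the remaining WAL files moved to oldWALs as the region server stops. During the test itself, a roll of the active WAL can be requested through the client API; a minimal sketch, assuming an Admin handle and the ServerName of the region server under test (both taken from the running minicluster), with a hypothetical helper name:

    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;

    // Hypothetical helper: request a roll of the active WAL on one region server;
    // the previous WAL file then becomes eligible for archival to oldWALs.
    static void rollWal(Admin admin, ServerName server) throws java.io.IOException {
      admin.rollWALWriter(server);
    }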
2023-06-08 16:55:02,023 INFO [RS:0;jenkins-hbase20:45311] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:45311 2023-06-08 16:55:02,029 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): regionserver:45311-0x101cba3a2ae0001, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,45311,1686243217721 2023-06-08 16:55:02,029 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 16:55:02,029 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): regionserver:45311-0x101cba3a2ae0001, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 16:55:02,030 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,45311,1686243217721] 2023-06-08 16:55:02,031 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,45311,1686243217721; numProcessing=1 2023-06-08 16:55:02,032 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,45311,1686243217721 already deleted, retry=false 2023-06-08 16:55:02,032 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,45311,1686243217721 expired; onlineServers=0 2023-06-08 16:55:02,032 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,37063,1686243216769' ***** 2023-06-08 16:55:02,032 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-06-08 16:55:02,032 DEBUG [M:0;jenkins-hbase20:37063] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2f860755, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-08 16:55:02,033 INFO [M:0;jenkins-hbase20:37063] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,37063,1686243216769 2023-06-08 16:55:02,033 INFO [M:0;jenkins-hbase20:37063] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,37063,1686243216769; all regions closed. 2023-06-08 16:55:02,033 DEBUG [M:0;jenkins-hbase20:37063] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 16:55:02,033 DEBUG [M:0;jenkins-hbase20:37063] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-06-08 16:55:02,033 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-06-08 16:55:02,034 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686243218597] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686243218597,5,FailOnTimeoutGroup] 2023-06-08 16:55:02,034 DEBUG [M:0;jenkins-hbase20:37063] cleaner.HFileCleaner(317): Stopping file delete threads 2023-06-08 16:55:02,034 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686243218596] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686243218596,5,FailOnTimeoutGroup] 2023-06-08 16:55:02,035 INFO [M:0;jenkins-hbase20:37063] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-06-08 16:55:02,035 INFO [M:0;jenkins-hbase20:37063] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-06-08 16:55:02,035 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-06-08 16:55:02,035 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:55:02,036 INFO [M:0;jenkins-hbase20:37063] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-06-08 16:55:02,036 DEBUG [M:0;jenkins-hbase20:37063] master.HMaster(1512): Stopping service threads 2023-06-08 16:55:02,036 INFO [M:0;jenkins-hbase20:37063] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-06-08 16:55:02,036 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-08 16:55:02,037 INFO [M:0;jenkins-hbase20:37063] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-06-08 16:55:02,037 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-06-08 16:55:02,037 DEBUG [M:0;jenkins-hbase20:37063] zookeeper.ZKUtil(398): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-06-08 16:55:02,037 WARN [M:0;jenkins-hbase20:37063] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-06-08 16:55:02,038 INFO [M:0;jenkins-hbase20:37063] assignment.AssignmentManager(315): Stopping assignment manager 2023-06-08 16:55:02,038 INFO [M:0;jenkins-hbase20:37063] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-06-08 16:55:02,038 DEBUG [M:0;jenkins-hbase20:37063] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-08 16:55:02,038 INFO [M:0;jenkins-hbase20:37063] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-08 16:55:02,038 DEBUG [M:0;jenkins-hbase20:37063] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:55:02,038 DEBUG [M:0;jenkins-hbase20:37063] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-08 16:55:02,038 DEBUG [M:0;jenkins-hbase20:37063] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:55:02,039 INFO [M:0;jenkins-hbase20:37063] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.31 KB heapSize=46.76 KB 2023-06-08 16:55:02,056 INFO [M:0;jenkins-hbase20:37063] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.31 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/81da3302b262425392f68b8d2a9d022d 2023-06-08 16:55:02,061 INFO [M:0;jenkins-hbase20:37063] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 81da3302b262425392f68b8d2a9d022d 2023-06-08 16:55:02,062 DEBUG [M:0;jenkins-hbase20:37063] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/81da3302b262425392f68b8d2a9d022d as hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/81da3302b262425392f68b8d2a9d022d 2023-06-08 16:55:02,069 INFO [M:0;jenkins-hbase20:37063] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 81da3302b262425392f68b8d2a9d022d 2023-06-08 16:55:02,069 INFO [M:0;jenkins-hbase20:37063] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/81da3302b262425392f68b8d2a9d022d, entries=11, sequenceid=100, filesize=6.1 K 2023-06-08 16:55:02,070 INFO [M:0;jenkins-hbase20:37063] regionserver.HRegion(2948): Finished flush of dataSize ~38.31 KB/39234, heapSize ~46.74 KB/47864, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 32ms, sequenceid=100, compaction requested=false 2023-06-08 16:55:02,072 INFO [M:0;jenkins-hbase20:37063] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:55:02,072 DEBUG [M:0;jenkins-hbase20:37063] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-08 16:55:02,072 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/MasterData/WALs/jenkins-hbase20.apache.org,37063,1686243216769 2023-06-08 16:55:02,077 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-08 16:55:02,077 INFO [M:0;jenkins-hbase20:37063] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 
2023-06-08 16:55:02,077 INFO [M:0;jenkins-hbase20:37063] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:37063 2023-06-08 16:55:02,079 DEBUG [M:0;jenkins-hbase20:37063] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,37063,1686243216769 already deleted, retry=false 2023-06-08 16:55:02,131 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): regionserver:45311-0x101cba3a2ae0001, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 16:55:02,131 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): regionserver:45311-0x101cba3a2ae0001, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 16:55:02,131 INFO [RS:0;jenkins-hbase20:45311] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,45311,1686243217721; zookeeper connection closed. 2023-06-08 16:55:02,132 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@15007953] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@15007953 2023-06-08 16:55:02,133 INFO [Listener at localhost.localdomain/38529] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-06-08 16:55:02,231 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 16:55:02,231 DEBUG [Listener at localhost.localdomain/38529-EventThread] zookeeper.ZKWatcher(600): master:37063-0x101cba3a2ae0000, quorum=127.0.0.1:62547, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 16:55:02,231 INFO [M:0;jenkins-hbase20:37063] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,37063,1686243216769; zookeeper connection closed. 
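At this point the master and the single region server have exited and their ZooKeeper connections are closed; the DataNodes and the mini ZK cluster are torn down in the entries that follow. In test code this teardown is usually a single call on the testing utility; a minimal sketch, assuming util is the HBaseTestingUtility that started the cluster and the helper name is hypothetical:

    import org.apache.hadoop.hbase.HBaseTestingUtility;

    // Hypothetical helper: tear down the minicluster (HBase, ZooKeeper and DFS)
    // whose shutdown is recorded in the surrounding entries.
    static void tearDownCluster(HBaseTestingUtility util) throws Exception {
      util.shutdownMiniCluster();
    }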
2023-06-08 16:55:02,236 WARN [Listener at localhost.localdomain/38529] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 16:55:02,242 INFO [Listener at localhost.localdomain/38529] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 16:55:02,355 WARN [BP-1509460338-148.251.75.209-1686243214216 heartbeating to localhost.localdomain/127.0.0.1:33111] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-08 16:55:02,355 WARN [BP-1509460338-148.251.75.209-1686243214216 heartbeating to localhost.localdomain/127.0.0.1:33111] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1509460338-148.251.75.209-1686243214216 (Datanode Uuid 1bd71afd-d1eb-4924-a704-10683bf11362) service to localhost.localdomain/127.0.0.1:33111 2023-06-08 16:55:02,358 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c/cluster_c918d1ca-c7e8-de1e-245b-62226e28fc77/dfs/data/data3/current/BP-1509460338-148.251.75.209-1686243214216] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:55:02,358 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c/cluster_c918d1ca-c7e8-de1e-245b-62226e28fc77/dfs/data/data4/current/BP-1509460338-148.251.75.209-1686243214216] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:55:02,359 WARN [Listener at localhost.localdomain/38529] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 16:55:02,361 INFO [Listener at localhost.localdomain/38529] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 16:55:02,471 WARN [BP-1509460338-148.251.75.209-1686243214216 heartbeating to localhost.localdomain/127.0.0.1:33111] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-08 16:55:02,472 WARN [BP-1509460338-148.251.75.209-1686243214216 heartbeating to localhost.localdomain/127.0.0.1:33111] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1509460338-148.251.75.209-1686243214216 (Datanode Uuid c8159548-fe81-41b3-8861-346c593825fa) service to localhost.localdomain/127.0.0.1:33111 2023-06-08 16:55:02,473 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c/cluster_c918d1ca-c7e8-de1e-245b-62226e28fc77/dfs/data/data1/current/BP-1509460338-148.251.75.209-1686243214216] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:55:02,474 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c/cluster_c918d1ca-c7e8-de1e-245b-62226e28fc77/dfs/data/data2/current/BP-1509460338-148.251.75.209-1686243214216] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:55:02,507 INFO [Listener at localhost.localdomain/38529] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-06-08 16:55:02,624 INFO 
[Listener at localhost.localdomain/38529] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-06-08 16:55:02,658 INFO [Listener at localhost.localdomain/38529] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-06-08 16:55:02,669 INFO [Listener at localhost.localdomain/38529] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=51 (was 10) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-2 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Monitor thread for TaskMonitor java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.monitoring.TaskMonitor$MonitorRunnable.run(TaskMonitor.java:327) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: regionserver/jenkins-hbase20:0.leaseChecker java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.regionserver.LeaseManager.run(LeaseManager.java:82) 
Potentially hanging thread: IPC Parameter Sending Thread #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HBase-Metrics2-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/38529 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (305059913) connection to localhost.localdomain/127.0.0.1:33111 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-5-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:33111 java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.PeerCache@270c0634 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:253) org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:46) org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:124) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: SessionTracker java.lang.Thread.sleep(Native Method) org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:151) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (305059913) connection to localhost.localdomain/127.0.0.1:33111 from jenkins.hfs.0 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-1-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.0@localhost.localdomain:33111 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase20:0.procedureResultReporter sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77) Potentially hanging thread: master/jenkins-hbase20:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Idle-Rpc-Conn-Sweeper-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: SnapshotHandlerChoreCleaner sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (305059913) connection to localhost.localdomain/127.0.0.1:33111 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: master/jenkins-hbase20:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcClient-timer-pool-0 java.lang.Thread.sleep(Native Method) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:600) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:496) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner java.lang.Object.wait(Native Method) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165) org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3693) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=441 (was 263) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=84 (was 204), ProcessCount=187 (was 187), AvailableMemoryMB=2325 (was 2898) 2023-06-08 16:55:02,679 INFO [Listener at localhost.localdomain/38529] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=52, OpenFileDescriptor=441, MaxFileDescriptor=60000, SystemLoadAverage=84, ProcessCount=187, AvailableMemoryMB=2324 2023-06-08 16:55:02,680 INFO [Listener at localhost.localdomain/38529] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-06-08 16:55:02,680 INFO [Listener at localhost.localdomain/38529] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c/hadoop.log.dir so I do NOT create it in target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4 2023-06-08 16:55:02,680 INFO [Listener at localhost.localdomain/38529] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/949d5e58-dc5c-a036-e95d-f605f5fac75c/hadoop.tmp.dir so I do NOT create it in target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4 2023-06-08 16:55:02,680 INFO [Listener at localhost.localdomain/38529] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/cluster_6ce0e350-0cdf-eede-66c7-5cb864cdfac2, deleteOnExit=true 2023-06-08 16:55:02,680 INFO [Listener at localhost.localdomain/38529] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-06-08 16:55:02,680 INFO [Listener at localhost.localdomain/38529] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/test.cache.data in system properties and HBase conf 2023-06-08 16:55:02,680 INFO [Listener at localhost.localdomain/38529] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/hadoop.tmp.dir in system properties and HBase conf 2023-06-08 16:55:02,680 INFO [Listener at localhost.localdomain/38529] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/hadoop.log.dir in system properties and HBase conf 2023-06-08 16:55:02,680 INFO [Listener at localhost.localdomain/38529] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/mapreduce.cluster.local.dir in system properties and HBase conf 2023-06-08 16:55:02,681 INFO [Listener at localhost.localdomain/38529] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-06-08 16:55:02,681 INFO [Listener at localhost.localdomain/38529] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-06-08 16:55:02,681 DEBUG [Listener at localhost.localdomain/38529] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-06-08 16:55:02,681 INFO [Listener at localhost.localdomain/38529] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-06-08 16:55:02,681 INFO [Listener at localhost.localdomain/38529] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-06-08 16:55:02,681 INFO [Listener at localhost.localdomain/38529] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-06-08 16:55:02,681 INFO [Listener at localhost.localdomain/38529] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-08 16:55:02,681 INFO [Listener at localhost.localdomain/38529] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-06-08 16:55:02,681 INFO [Listener at localhost.localdomain/38529] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-08 16:55:02,682 INFO [Listener at localhost.localdomain/38529] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-08 16:55:02,682 INFO [Listener at localhost.localdomain/38529] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-08 16:55:02,682 INFO [Listener at localhost.localdomain/38529] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-08 16:55:02,682 INFO [Listener at localhost.localdomain/38529] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/nfs.dump.dir in system properties and HBase conf 2023-06-08 16:55:02,682 INFO [Listener at localhost.localdomain/38529] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/java.io.tmpdir in system properties and HBase conf 2023-06-08 16:55:02,682 INFO [Listener at localhost.localdomain/38529] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-08 16:55:02,682 INFO [Listener at localhost.localdomain/38529] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-08 16:55:02,682 INFO [Listener at localhost.localdomain/38529] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-08 16:55:02,684 WARN [Listener at localhost.localdomain/38529] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
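[Editor's note] The cluster shape requested above (numMasters=1, numRegionServers=1, numDataNodes=2, numZkServers=1) is what a test obtains through HBaseTestingUtility. A minimal sketch, assuming the HBase 2.x test API; the class name and the test body are illustrative, not taken from TestLogRolling itself:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterSketch {
    public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        // Mirrors the StartMiniClusterOption values printed in the log above.
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(1)
            .numDataNodes(2)
            .numZkServers(1)
            .build();
        util.startMiniCluster(option);   // brings up DFS, ZooKeeper, master and region server
        try {
            // ... test logic against util.getConnection() would go here ...
        } finally {
            util.shutdownMiniCluster();  // tears the cluster down; test data dirs are deleteOnExit
        }
    }
}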
2023-06-08 16:55:02,685 WARN [Listener at localhost.localdomain/38529] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-08 16:55:02,685 WARN [Listener at localhost.localdomain/38529] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-08 16:55:02,712 WARN [Listener at localhost.localdomain/38529] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 16:55:02,714 INFO [Listener at localhost.localdomain/38529] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 16:55:02,719 INFO [Listener at localhost.localdomain/38529] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/java.io.tmpdir/Jetty_localhost_localdomain_36779_hdfs____.9swv0w/webapp 2023-06-08 16:55:02,758 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-08 16:55:02,793 INFO [Listener at localhost.localdomain/38529] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:36779 2023-06-08 16:55:02,794 WARN [Listener at localhost.localdomain/38529] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-06-08 16:55:02,795 WARN [Listener at localhost.localdomain/38529] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-08 16:55:02,795 WARN [Listener at localhost.localdomain/38529] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-08 16:55:02,824 WARN [Listener at localhost.localdomain/41115] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 16:55:02,834 WARN [Listener at localhost.localdomain/41115] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 16:55:02,837 WARN [Listener at localhost.localdomain/41115] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 16:55:02,838 INFO [Listener at localhost.localdomain/41115] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 16:55:02,844 INFO [Listener at localhost.localdomain/41115] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/java.io.tmpdir/Jetty_localhost_33867_datanode____9pzzls/webapp 2023-06-08 16:55:02,917 INFO [Listener at localhost.localdomain/41115] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33867 2023-06-08 16:55:02,923 WARN [Listener at localhost.localdomain/32793] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 16:55:02,934 WARN [Listener at localhost.localdomain/32793] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 16:55:02,936 WARN [Listener at localhost.localdomain/32793] http.HttpRequestLog(97): Jetty 
request log can only be enabled using Log4j 2023-06-08 16:55:02,937 INFO [Listener at localhost.localdomain/32793] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 16:55:02,941 INFO [Listener at localhost.localdomain/32793] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/java.io.tmpdir/Jetty_localhost_34189_datanode____df44ew/webapp 2023-06-08 16:55:03,000 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1b8fdc212c66419c: Processing first storage report for DS-ca2f5a60-f3b7-4cbe-9e7f-af6972f893b2 from datanode 77a4bf3c-b092-40e2-af83-20f2c71af9dd 2023-06-08 16:55:03,000 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1b8fdc212c66419c: from storage DS-ca2f5a60-f3b7-4cbe-9e7f-af6972f893b2 node DatanodeRegistration(127.0.0.1:35999, datanodeUuid=77a4bf3c-b092-40e2-af83-20f2c71af9dd, infoPort=39281, infoSecurePort=0, ipcPort=32793, storageInfo=lv=-57;cid=testClusterID;nsid=1895245100;c=1686243302687), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 16:55:03,000 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1b8fdc212c66419c: Processing first storage report for DS-3348ff7b-a57d-4acb-ab60-6cad00e32fe2 from datanode 77a4bf3c-b092-40e2-af83-20f2c71af9dd 2023-06-08 16:55:03,000 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1b8fdc212c66419c: from storage DS-3348ff7b-a57d-4acb-ab60-6cad00e32fe2 node DatanodeRegistration(127.0.0.1:35999, datanodeUuid=77a4bf3c-b092-40e2-af83-20f2c71af9dd, infoPort=39281, infoSecurePort=0, ipcPort=32793, storageInfo=lv=-57;cid=testClusterID;nsid=1895245100;c=1686243302687), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 16:55:03,022 INFO [Listener at localhost.localdomain/32793] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34189 2023-06-08 16:55:03,028 WARN [Listener at localhost.localdomain/46243] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 16:55:03,097 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xeb90fbb511af130d: Processing first storage report for DS-7bbf8e6b-ab67-452d-9ab5-c145245a9853 from datanode 467742bf-4c0e-4765-bdb0-37ffeff8ae1a 2023-06-08 16:55:03,097 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xeb90fbb511af130d: from storage DS-7bbf8e6b-ab67-452d-9ab5-c145245a9853 node DatanodeRegistration(127.0.0.1:41817, datanodeUuid=467742bf-4c0e-4765-bdb0-37ffeff8ae1a, infoPort=35971, infoSecurePort=0, ipcPort=46243, storageInfo=lv=-57;cid=testClusterID;nsid=1895245100;c=1686243302687), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 16:55:03,097 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xeb90fbb511af130d: Processing first storage report for DS-5d17193c-ce0b-4889-9c71-95473055db96 from datanode 467742bf-4c0e-4765-bdb0-37ffeff8ae1a 2023-06-08 16:55:03,097 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xeb90fbb511af130d: from 
storage DS-5d17193c-ce0b-4889-9c71-95473055db96 node DatanodeRegistration(127.0.0.1:41817, datanodeUuid=467742bf-4c0e-4765-bdb0-37ffeff8ae1a, infoPort=35971, infoSecurePort=0, ipcPort=46243, storageInfo=lv=-57;cid=testClusterID;nsid=1895245100;c=1686243302687), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 16:55:03,140 DEBUG [Listener at localhost.localdomain/46243] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4 2023-06-08 16:55:03,143 INFO [Listener at localhost.localdomain/46243] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/cluster_6ce0e350-0cdf-eede-66c7-5cb864cdfac2/zookeeper_0, clientPort=53698, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/cluster_6ce0e350-0cdf-eede-66c7-5cb864cdfac2/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/cluster_6ce0e350-0cdf-eede-66c7-5cb864cdfac2/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-06-08 16:55:03,144 INFO [Listener at localhost.localdomain/46243] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=53698 2023-06-08 16:55:03,145 INFO [Listener at localhost.localdomain/46243] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:55:03,146 INFO [Listener at localhost.localdomain/46243] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:55:03,166 INFO [Listener at localhost.localdomain/46243] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06 with version=8 2023-06-08 16:55:03,166 INFO [Listener at localhost.localdomain/46243] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/hbase-staging 2023-06-08 16:55:03,168 INFO [Listener at localhost.localdomain/46243] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-06-08 16:55:03,168 INFO [Listener at localhost.localdomain/46243] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 16:55:03,168 INFO [Listener at localhost.localdomain/46243] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-08 16:55:03,168 INFO [Listener at localhost.localdomain/46243] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-08 16:55:03,168 INFO [Listener at 
localhost.localdomain/46243] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 16:55:03,168 INFO [Listener at localhost.localdomain/46243] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-08 16:55:03,168 INFO [Listener at localhost.localdomain/46243] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-06-08 16:55:03,169 INFO [Listener at localhost.localdomain/46243] ipc.NettyRpcServer(120): Bind to /148.251.75.209:44707 2023-06-08 16:55:03,170 INFO [Listener at localhost.localdomain/46243] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:55:03,171 INFO [Listener at localhost.localdomain/46243] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:55:03,172 INFO [Listener at localhost.localdomain/46243] zookeeper.RecoverableZooKeeper(93): Process identifier=master:44707 connecting to ZooKeeper ensemble=127.0.0.1:53698 2023-06-08 16:55:03,178 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): master:447070x0, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-08 16:55:03,179 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:44707-0x101cba4f7120000 connected 2023-06-08 16:55:03,190 DEBUG [Listener at localhost.localdomain/46243] zookeeper.ZKUtil(164): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-08 16:55:03,191 DEBUG [Listener at localhost.localdomain/46243] zookeeper.ZKUtil(164): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 16:55:03,191 DEBUG [Listener at localhost.localdomain/46243] zookeeper.ZKUtil(164): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-08 16:55:03,197 DEBUG [Listener at localhost.localdomain/46243] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44707 2023-06-08 16:55:03,197 DEBUG [Listener at localhost.localdomain/46243] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44707 2023-06-08 16:55:03,198 DEBUG [Listener at localhost.localdomain/46243] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44707 2023-06-08 16:55:03,200 DEBUG [Listener at localhost.localdomain/46243] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44707 2023-06-08 16:55:03,200 DEBUG [Listener at localhost.localdomain/46243] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44707 
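[Editor's note] The repeated "Set watcher on znode that does not yet exist" lines correspond to the standard ZooKeeper pattern of calling exists() with a watcher so the caller is notified when /hbase/master or /hbase/running is later created. A minimal sketch with the plain ZooKeeper client; the quorum string and znode path are taken from the log, everything else is illustrative:

import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ZnodeWatchSketch {
    public static void main(String[] args) throws Exception {
        CountDownLatch created = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("127.0.0.1:53698", 30000, event -> { });
        // exists() registers the watch even though the znode is absent;
        // a later create fires a NodeCreated event, as seen in the log.
        zk.exists("/hbase/master", event -> {
            if (event.getType() == Watcher.Event.EventType.NodeCreated) {
                created.countDown();
            }
        });
        created.await();
        zk.close();
    }
}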
2023-06-08 16:55:03,201 INFO [Listener at localhost.localdomain/46243] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06, hbase.cluster.distributed=false 2023-06-08 16:55:03,214 INFO [Listener at localhost.localdomain/46243] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-06-08 16:55:03,214 INFO [Listener at localhost.localdomain/46243] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 16:55:03,214 INFO [Listener at localhost.localdomain/46243] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-08 16:55:03,214 INFO [Listener at localhost.localdomain/46243] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-08 16:55:03,214 INFO [Listener at localhost.localdomain/46243] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 16:55:03,214 INFO [Listener at localhost.localdomain/46243] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-08 16:55:03,215 INFO [Listener at localhost.localdomain/46243] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-08 16:55:03,216 INFO [Listener at localhost.localdomain/46243] ipc.NettyRpcServer(120): Bind to /148.251.75.209:38967 2023-06-08 16:55:03,216 INFO [Listener at localhost.localdomain/46243] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-08 16:55:03,217 DEBUG [Listener at localhost.localdomain/46243] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-08 16:55:03,218 INFO [Listener at localhost.localdomain/46243] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:55:03,219 INFO [Listener at localhost.localdomain/46243] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:55:03,220 INFO [Listener at localhost.localdomain/46243] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38967 connecting to ZooKeeper ensemble=127.0.0.1:53698 2023-06-08 16:55:03,223 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): regionserver:389670x0, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-08 16:55:03,224 DEBUG [Listener at localhost.localdomain/46243] zookeeper.ZKUtil(164): regionserver:389670x0, quorum=127.0.0.1:53698, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-08 16:55:03,224 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38967-0x101cba4f7120001 connected 2023-06-08 16:55:03,225 DEBUG [Listener at 
localhost.localdomain/46243] zookeeper.ZKUtil(164): regionserver:38967-0x101cba4f7120001, quorum=127.0.0.1:53698, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 16:55:03,225 DEBUG [Listener at localhost.localdomain/46243] zookeeper.ZKUtil(164): regionserver:38967-0x101cba4f7120001, quorum=127.0.0.1:53698, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-08 16:55:03,226 DEBUG [Listener at localhost.localdomain/46243] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38967 2023-06-08 16:55:03,226 DEBUG [Listener at localhost.localdomain/46243] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38967 2023-06-08 16:55:03,226 DEBUG [Listener at localhost.localdomain/46243] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38967 2023-06-08 16:55:03,226 DEBUG [Listener at localhost.localdomain/46243] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38967 2023-06-08 16:55:03,227 DEBUG [Listener at localhost.localdomain/46243] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38967 2023-06-08 16:55:03,228 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,44707,1686243303167 2023-06-08 16:55:03,245 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-08 16:55:03,245 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,44707,1686243303167 2023-06-08 16:55:03,247 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-08 16:55:03,247 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): regionserver:38967-0x101cba4f7120001, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-08 16:55:03,247 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:55:03,247 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-08 16:55:03,248 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,44707,1686243303167 from backup master directory 2023-06-08 16:55:03,248 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Set watcher on existing znode=/hbase/master 
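[Editor's note] The "Allocating BlockCache size=782.40 MB" figure above is consistent with the default hfile.block.cache.size fraction of 0.4 applied to a roughly 1.9 GB test heap; the heap size is inferred from the log, not printed in it. A small arithmetic sketch under that assumption:

public class BlockCacheSizeSketch {
    public static void main(String[] args) {
        double maxHeapMb = 1956.0;        // assumed -Xmx of the test JVM (782.40 MB / 0.4)
        double blockCacheFraction = 0.4;  // default hfile.block.cache.size
        double blockCacheMb = maxHeapMb * blockCacheFraction;
        System.out.printf("BlockCache size = %.2f MB%n", blockCacheMb); // ~782.40 MB, matching the log
    }
}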
2023-06-08 16:55:03,249 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,44707,1686243303167 2023-06-08 16:55:03,249 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-06-08 16:55:03,249 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-08 16:55:03,250 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,44707,1686243303167 2023-06-08 16:55:03,265 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/hbase.id with ID: c638f6b5-ac1e-4e3e-ae8f-4552f300975c 2023-06-08 16:55:03,277 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:55:03,280 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:55:03,293 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x06073e90 to 127.0.0.1:53698 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-08 16:55:03,296 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4d5fd1aa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-08 16:55:03,297 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-08 16:55:03,298 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-06-08 16:55:03,298 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 16:55:03,299 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 
'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/MasterData/data/master/store-tmp 2023-06-08 16:55:03,310 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:55:03,310 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-08 16:55:03,310 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:55:03,310 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:55:03,310 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-08 16:55:03,310 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:55:03,310 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:55:03,310 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-08 16:55:03,311 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/MasterData/WALs/jenkins-hbase20.apache.org,44707,1686243303167 2023-06-08 16:55:03,314 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C44707%2C1686243303167, suffix=, logDir=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/MasterData/WALs/jenkins-hbase20.apache.org,44707,1686243303167, archiveDir=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/MasterData/oldWALs, maxLogs=10 2023-06-08 16:55:03,322 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/MasterData/WALs/jenkins-hbase20.apache.org,44707,1686243303167/jenkins-hbase20.apache.org%2C44707%2C1686243303167.1686243303314 2023-06-08 16:55:03,322 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41817,DS-7bbf8e6b-ab67-452d-9ab5-c145245a9853,DISK], DatanodeInfoWithStorage[127.0.0.1:35999,DS-ca2f5a60-f3b7-4cbe-9e7f-af6972f893b2,DISK]] 2023-06-08 16:55:03,322 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-08 16:55:03,323 DEBUG 
[master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:55:03,323 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:55:03,323 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:55:03,325 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:55:03,328 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-08 16:55:03,328 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-08 16:55:03,329 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:55:03,330 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:55:03,331 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:55:03,335 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:55:03,338 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 16:55:03,338 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next 
sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=836002, jitterRate=0.06303246319293976}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-08 16:55:03,338 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-08 16:55:03,338 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-08 16:55:03,340 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-08 16:55:03,340 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-06-08 16:55:03,340 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-06-08 16:55:03,341 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-06-08 16:55:03,342 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-06-08 16:55:03,342 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-08 16:55:03,343 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-08 16:55:03,345 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-06-08 16:55:03,355 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-08 16:55:03,355 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
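[Editor's note] The WAL line above ("blocksize=256 MB, rollsize=128 MB") matches the usual relation rollsize = WAL block size x hbase.regionserver.logroll.multiplier, which defaults to 0.5 on this branch. A sketch of how a test might shrink those values to force frequent rolls; the concrete numbers below are illustrative, not the ones this test uses:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalRollConfigSketch {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Illustrative: a 2 MB WAL block size with the default 0.5 multiplier
        // rolls logs at ~1 MB, the same halving seen in the log above.
        conf.setLong("hbase.regionserver.hlog.blocksize", 2L * 1024 * 1024);
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);
        long blocksize = conf.getLong("hbase.regionserver.hlog.blocksize", 0L);
        float multiplier = conf.getFloat("hbase.regionserver.logroll.multiplier", 0.5f);
        System.out.println("roll size = " + (long) (blocksize * multiplier) + " bytes");
    }
}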
2023-06-08 16:55:03,356 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-08 16:55:03,356 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-08 16:55:03,357 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-08 16:55:03,358 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:55:03,359 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-08 16:55:03,360 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-08 16:55:03,360 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-08 16:55:03,361 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-08 16:55:03,361 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): regionserver:38967-0x101cba4f7120001, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-08 16:55:03,361 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:55:03,362 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,44707,1686243303167, sessionid=0x101cba4f7120000, setting cluster-up flag (Was=false) 2023-06-08 16:55:03,366 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:55:03,369 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-08 16:55:03,370 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,44707,1686243303167 2023-06-08 16:55:03,373 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:55:03,376 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-08 16:55:03,377 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,44707,1686243303167 2023-06-08 16:55:03,377 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/.hbase-snapshot/.tmp 2023-06-08 16:55:03,380 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-08 16:55:03,380 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-08 16:55:03,381 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-08 16:55:03,381 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-08 16:55:03,381 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-08 16:55:03,381 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-06-08 16:55:03,381 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:55:03,381 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-08 16:55:03,381 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:55:03,382 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1686243333382 2023-06-08 16:55:03,383 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-08 16:55:03,383 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-08 16:55:03,383 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-08 16:55:03,383 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): 
Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-08 16:55:03,383 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-08 16:55:03,383 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-08 16:55:03,386 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-08 16:55:03,386 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-08 16:55:03,386 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-08 16:55:03,387 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-08 16:55:03,387 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-08 16:55:03,387 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-08 16:55:03,388 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-08 16:55:03,388 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-08 16:55:03,388 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686243303388,5,FailOnTimeoutGroup] 2023-06-08 16:55:03,388 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686243303388,5,FailOnTimeoutGroup] 2023-06-08 16:55:03,388 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 
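[Editor's note] The LogsCleaner and HFileCleaner entries above are ScheduledChore instances run by the master's ChoreService. A minimal sketch of the same mechanism with a hypothetical custom chore, assuming the public ScheduledChore(name, stopper, period) constructor; the chore name and period are made up:

import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public class ChoreSketch {
    public static void main(String[] args) throws InterruptedException {
        Stoppable stoppable = new Stoppable() {
            private volatile boolean stopped;
            @Override public void stop(String why) { stopped = true; }
            @Override public boolean isStopped() { return stopped; }
        };
        // Hypothetical chore: runs every 1000 ms until the stoppable is stopped.
        ScheduledChore chore = new ScheduledChore("exampleChore", stoppable, 1000) {
            @Override protected void chore() {
                System.out.println("chore tick");
            }
        };
        ChoreService service = new ChoreService("example");
        service.scheduleChore(chore);
        Thread.sleep(3500);
        service.shutdown();
    }
}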
2023-06-08 16:55:03,388 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-08 16:55:03,388 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-08 16:55:03,388 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-08 16:55:03,389 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-06-08 16:55:03,402 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-08 16:55:03,403 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-08 16:55:03,403 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06 2023-06-08 16:55:03,414 DEBUG [PEWorker-1] regionserver.HRegion(866): 
Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:55:03,415 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-08 16:55:03,417 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/hbase/meta/1588230740/info 2023-06-08 16:55:03,418 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-08 16:55:03,419 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:55:03,419 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-08 16:55:03,420 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/hbase/meta/1588230740/rep_barrier 2023-06-08 16:55:03,421 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-08 16:55:03,421 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:55:03,422 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
prefetchOnOpen=false, for column family table of region 1588230740 2023-06-08 16:55:03,423 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/hbase/meta/1588230740/table 2023-06-08 16:55:03,424 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-08 16:55:03,424 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:55:03,425 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/hbase/meta/1588230740 2023-06-08 16:55:03,426 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/hbase/meta/1588230740 2023-06-08 16:55:03,429 INFO [RS:0;jenkins-hbase20:38967] regionserver.HRegionServer(951): ClusterId : c638f6b5-ac1e-4e3e-ae8f-4552f300975c 2023-06-08 16:55:03,430 DEBUG [RS:0;jenkins-hbase20:38967] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-08 16:55:03,431 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
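[Editor's note] The dump above is the toString form of the hbase:meta TableDescriptor with its info, rep_barrier and table families. For comparison, a user table with similar family settings would be declared roughly as follows with the HBase 2.x client API; the table name here is illustrative, not meta itself:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class DescriptorSketch {
    public static void main(String[] args) {
        // Family settings mirror the logged attributes: no bloom filter, in-memory,
        // 3 versions, 8 KB blocks, block cache enabled.
        ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("info"))
            .setBloomFilterType(BloomType.NONE)
            .setInMemory(true)
            .setMaxVersions(3)
            .setBlocksize(8192)
            .setBlockCacheEnabled(true)
            .build();
        TableDescriptor table = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("example_table"))  // illustrative name
            .setColumnFamily(info)
            .build();
        System.out.println(table);
    }
}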
2023-06-08 16:55:03,432 DEBUG [RS:0;jenkins-hbase20:38967] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-08 16:55:03,432 DEBUG [RS:0;jenkins-hbase20:38967] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-08 16:55:03,432 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-08 16:55:03,434 DEBUG [RS:0;jenkins-hbase20:38967] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-08 16:55:03,435 DEBUG [RS:0;jenkins-hbase20:38967] zookeeper.ReadOnlyZKClient(139): Connect 0x756abff9 to 127.0.0.1:53698 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-08 16:55:03,436 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 16:55:03,437 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=809644, jitterRate=0.029516443610191345}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-08 16:55:03,437 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-08 16:55:03,437 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-08 16:55:03,438 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-08 16:55:03,438 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-08 16:55:03,438 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-08 16:55:03,438 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-08 16:55:03,438 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-08 16:55:03,439 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-08 16:55:03,439 DEBUG [RS:0;jenkins-hbase20:38967] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7bad4f24, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-08 16:55:03,439 DEBUG [RS:0;jenkins-hbase20:38967] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7f92d489, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-08 16:55:03,440 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-08 16:55:03,440 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-08 16:55:03,440 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-08 16:55:03,442 INFO 
[PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-08 16:55:03,444 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-08 16:55:03,448 DEBUG [RS:0;jenkins-hbase20:38967] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:38967 2023-06-08 16:55:03,448 INFO [RS:0;jenkins-hbase20:38967] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-08 16:55:03,448 INFO [RS:0;jenkins-hbase20:38967] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-08 16:55:03,448 DEBUG [RS:0;jenkins-hbase20:38967] regionserver.HRegionServer(1022): About to register with Master. 2023-06-08 16:55:03,449 INFO [RS:0;jenkins-hbase20:38967] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,44707,1686243303167 with isa=jenkins-hbase20.apache.org/148.251.75.209:38967, startcode=1686243303213 2023-06-08 16:55:03,449 DEBUG [RS:0;jenkins-hbase20:38967] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-08 16:55:03,453 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:34875, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-06-08 16:55:03,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44707] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,38967,1686243303213 2023-06-08 16:55:03,455 DEBUG [RS:0;jenkins-hbase20:38967] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06 2023-06-08 16:55:03,455 DEBUG [RS:0;jenkins-hbase20:38967] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:41115 2023-06-08 16:55:03,455 DEBUG [RS:0;jenkins-hbase20:38967] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-08 16:55:03,457 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 16:55:03,458 DEBUG [RS:0;jenkins-hbase20:38967] zookeeper.ZKUtil(162): regionserver:38967-0x101cba4f7120001, quorum=127.0.0.1:53698, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38967,1686243303213 2023-06-08 16:55:03,458 WARN [RS:0;jenkins-hbase20:38967] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-06-08 16:55:03,458 INFO [RS:0;jenkins-hbase20:38967] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 16:55:03,458 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,38967,1686243303213] 2023-06-08 16:55:03,458 DEBUG [RS:0;jenkins-hbase20:38967] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,38967,1686243303213 2023-06-08 16:55:03,462 DEBUG [RS:0;jenkins-hbase20:38967] zookeeper.ZKUtil(162): regionserver:38967-0x101cba4f7120001, quorum=127.0.0.1:53698, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38967,1686243303213 2023-06-08 16:55:03,463 DEBUG [RS:0;jenkins-hbase20:38967] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-08 16:55:03,463 INFO [RS:0;jenkins-hbase20:38967] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-08 16:55:03,466 INFO [RS:0;jenkins-hbase20:38967] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-08 16:55:03,468 INFO [RS:0;jenkins-hbase20:38967] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-08 16:55:03,468 INFO [RS:0;jenkins-hbase20:38967] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-08 16:55:03,468 INFO [RS:0;jenkins-hbase20:38967] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-08 16:55:03,469 INFO [RS:0;jenkins-hbase20:38967] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-06-08 16:55:03,469 DEBUG [RS:0;jenkins-hbase20:38967] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:55:03,470 DEBUG [RS:0;jenkins-hbase20:38967] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:55:03,470 DEBUG [RS:0;jenkins-hbase20:38967] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:55:03,470 DEBUG [RS:0;jenkins-hbase20:38967] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:55:03,470 DEBUG [RS:0;jenkins-hbase20:38967] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:55:03,470 DEBUG [RS:0;jenkins-hbase20:38967] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-08 16:55:03,470 DEBUG [RS:0;jenkins-hbase20:38967] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:55:03,470 DEBUG [RS:0;jenkins-hbase20:38967] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:55:03,470 DEBUG [RS:0;jenkins-hbase20:38967] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:55:03,470 DEBUG [RS:0;jenkins-hbase20:38967] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:55:03,471 INFO [RS:0;jenkins-hbase20:38967] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-08 16:55:03,471 INFO [RS:0;jenkins-hbase20:38967] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-08 16:55:03,471 INFO [RS:0;jenkins-hbase20:38967] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-08 16:55:03,480 INFO [RS:0;jenkins-hbase20:38967] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-08 16:55:03,480 INFO [RS:0;jenkins-hbase20:38967] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,38967,1686243303213-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-06-08 16:55:03,491 INFO [RS:0;jenkins-hbase20:38967] regionserver.Replication(203): jenkins-hbase20.apache.org,38967,1686243303213 started 2023-06-08 16:55:03,491 INFO [RS:0;jenkins-hbase20:38967] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,38967,1686243303213, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:38967, sessionid=0x101cba4f7120001 2023-06-08 16:55:03,491 DEBUG [RS:0;jenkins-hbase20:38967] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-08 16:55:03,491 DEBUG [RS:0;jenkins-hbase20:38967] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,38967,1686243303213 2023-06-08 16:55:03,491 DEBUG [RS:0;jenkins-hbase20:38967] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,38967,1686243303213' 2023-06-08 16:55:03,491 DEBUG [RS:0;jenkins-hbase20:38967] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-08 16:55:03,492 DEBUG [RS:0;jenkins-hbase20:38967] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-08 16:55:03,492 DEBUG [RS:0;jenkins-hbase20:38967] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-08 16:55:03,492 DEBUG [RS:0;jenkins-hbase20:38967] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-08 16:55:03,492 DEBUG [RS:0;jenkins-hbase20:38967] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,38967,1686243303213 2023-06-08 16:55:03,492 DEBUG [RS:0;jenkins-hbase20:38967] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,38967,1686243303213' 2023-06-08 16:55:03,492 DEBUG [RS:0;jenkins-hbase20:38967] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-06-08 16:55:03,493 DEBUG [RS:0;jenkins-hbase20:38967] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-08 16:55:03,493 DEBUG [RS:0;jenkins-hbase20:38967] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-08 16:55:03,493 INFO [RS:0;jenkins-hbase20:38967] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-08 16:55:03,493 INFO [RS:0;jenkins-hbase20:38967] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-06-08 16:55:03,594 DEBUG [jenkins-hbase20:44707] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-08 16:55:03,595 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,38967,1686243303213, state=OPENING 2023-06-08 16:55:03,596 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-08 16:55:03,596 INFO [RS:0;jenkins-hbase20:38967] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C38967%2C1686243303213, suffix=, logDir=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,38967,1686243303213, archiveDir=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/oldWALs, maxLogs=32 2023-06-08 16:55:03,597 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:55:03,597 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-08 16:55:03,597 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,38967,1686243303213}] 2023-06-08 16:55:03,612 INFO [RS:0;jenkins-hbase20:38967] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,38967,1686243303213/jenkins-hbase20.apache.org%2C38967%2C1686243303213.1686243303599 2023-06-08 16:55:03,612 DEBUG [RS:0;jenkins-hbase20:38967] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41817,DS-7bbf8e6b-ab67-452d-9ab5-c145245a9853,DISK], DatanodeInfoWithStorage[127.0.0.1:35999,DS-ca2f5a60-f3b7-4cbe-9e7f-af6972f893b2,DISK]] 2023-06-08 16:55:03,753 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,38967,1686243303213 2023-06-08 16:55:03,753 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-08 16:55:03,756 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:34250, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-08 16:55:03,763 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-08 16:55:03,763 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 16:55:03,767 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C38967%2C1686243303213.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,38967,1686243303213, archiveDir=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/oldWALs, maxLogs=32 2023-06-08 16:55:03,784 
INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,38967,1686243303213/jenkins-hbase20.apache.org%2C38967%2C1686243303213.meta.1686243303770.meta 2023-06-08 16:55:03,784 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35999,DS-ca2f5a60-f3b7-4cbe-9e7f-af6972f893b2,DISK], DatanodeInfoWithStorage[127.0.0.1:41817,DS-7bbf8e6b-ab67-452d-9ab5-c145245a9853,DISK]] 2023-06-08 16:55:03,784 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-08 16:55:03,784 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-08 16:55:03,784 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-08 16:55:03,785 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-06-08 16:55:03,785 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-08 16:55:03,785 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:55:03,785 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-08 16:55:03,785 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-08 16:55:03,787 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-08 16:55:03,789 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/hbase/meta/1588230740/info 2023-06-08 16:55:03,789 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/hbase/meta/1588230740/info 2023-06-08 16:55:03,790 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-08 16:55:03,791 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:55:03,791 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-08 16:55:03,792 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/hbase/meta/1588230740/rep_barrier 2023-06-08 16:55:03,792 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/hbase/meta/1588230740/rep_barrier 2023-06-08 16:55:03,793 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-08 16:55:03,793 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:55:03,794 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-08 16:55:03,795 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/hbase/meta/1588230740/table 2023-06-08 16:55:03,795 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/hbase/meta/1588230740/table 2023-06-08 16:55:03,796 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-08 16:55:03,797 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:55:03,799 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/hbase/meta/1588230740 2023-06-08 16:55:03,800 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/hbase/meta/1588230740 2023-06-08 16:55:03,803 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-08 16:55:03,806 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-08 16:55:03,808 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=792215, jitterRate=0.007353559136390686}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-08 16:55:03,808 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-08 16:55:03,810 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1686243303753 2023-06-08 16:55:03,815 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-08 16:55:03,815 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-08 16:55:03,816 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,38967,1686243303213, state=OPEN 2023-06-08 16:55:03,818 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-08 16:55:03,818 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-08 16:55:03,822 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-08 16:55:03,822 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,38967,1686243303213 in 221 msec 2023-06-08 
16:55:03,826 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-08 16:55:03,826 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 382 msec 2023-06-08 16:55:03,830 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 449 msec 2023-06-08 16:55:03,830 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1686243303830, completionTime=-1 2023-06-08 16:55:03,830 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-08 16:55:03,831 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-06-08 16:55:03,833 DEBUG [hconnection-0xf8f214a-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-08 16:55:03,836 INFO [RS-EventLoopGroup-6-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:34254, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-08 16:55:03,837 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-08 16:55:03,837 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1686243363837 2023-06-08 16:55:03,837 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1686243423837 2023-06-08 16:55:03,837 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-06-08 16:55:03,842 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44707,1686243303167-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-08 16:55:03,842 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44707,1686243303167-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-08 16:55:03,842 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44707,1686243303167-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-08 16:55:03,843 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:44707, period=300000, unit=MILLISECONDS is enabled. 2023-06-08 16:55:03,843 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-08 16:55:03,843 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-06-08 16:55:03,843 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-08 16:55:03,844 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-08 16:55:03,845 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-08 16:55:03,847 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-08 16:55:03,848 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-08 16:55:03,850 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/.tmp/data/hbase/namespace/4ca15ea9ab6b9efd109f2ba06e32576b 2023-06-08 16:55:03,851 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/.tmp/data/hbase/namespace/4ca15ea9ab6b9efd109f2ba06e32576b empty. 2023-06-08 16:55:03,851 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/.tmp/data/hbase/namespace/4ca15ea9ab6b9efd109f2ba06e32576b 2023-06-08 16:55:03,851 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-08 16:55:03,871 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-08 16:55:03,872 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4ca15ea9ab6b9efd109f2ba06e32576b, NAME => 'hbase:namespace,,1686243303843.4ca15ea9ab6b9efd109f2ba06e32576b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/.tmp 2023-06-08 16:55:03,887 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686243303843.4ca15ea9ab6b9efd109f2ba06e32576b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:55:03,887 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 4ca15ea9ab6b9efd109f2ba06e32576b, disabling compactions & flushes 2023-06-08 16:55:03,887 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region 
hbase:namespace,,1686243303843.4ca15ea9ab6b9efd109f2ba06e32576b. 2023-06-08 16:55:03,887 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686243303843.4ca15ea9ab6b9efd109f2ba06e32576b. 2023-06-08 16:55:03,887 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686243303843.4ca15ea9ab6b9efd109f2ba06e32576b. after waiting 0 ms 2023-06-08 16:55:03,887 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686243303843.4ca15ea9ab6b9efd109f2ba06e32576b. 2023-06-08 16:55:03,887 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686243303843.4ca15ea9ab6b9efd109f2ba06e32576b. 2023-06-08 16:55:03,887 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 4ca15ea9ab6b9efd109f2ba06e32576b: 2023-06-08 16:55:03,891 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-08 16:55:03,893 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1686243303843.4ca15ea9ab6b9efd109f2ba06e32576b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686243303892"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686243303892"}]},"ts":"1686243303892"} 2023-06-08 16:55:03,896 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-08 16:55:03,897 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-08 16:55:03,898 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686243303897"}]},"ts":"1686243303897"} 2023-06-08 16:55:03,900 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-08 16:55:03,903 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=4ca15ea9ab6b9efd109f2ba06e32576b, ASSIGN}] 2023-06-08 16:55:03,906 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=4ca15ea9ab6b9efd109f2ba06e32576b, ASSIGN 2023-06-08 16:55:03,907 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=4ca15ea9ab6b9efd109f2ba06e32576b, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,38967,1686243303213; forceNewPlan=false, retain=false 2023-06-08 16:55:04,058 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=4ca15ea9ab6b9efd109f2ba06e32576b, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,38967,1686243303213 2023-06-08 16:55:04,059 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1686243303843.4ca15ea9ab6b9efd109f2ba06e32576b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686243304058"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686243304058"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686243304058"}]},"ts":"1686243304058"} 2023-06-08 16:55:04,064 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 4ca15ea9ab6b9efd109f2ba06e32576b, server=jenkins-hbase20.apache.org,38967,1686243303213}] 2023-06-08 16:55:04,230 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1686243303843.4ca15ea9ab6b9efd109f2ba06e32576b. 2023-06-08 16:55:04,230 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4ca15ea9ab6b9efd109f2ba06e32576b, NAME => 'hbase:namespace,,1686243303843.4ca15ea9ab6b9efd109f2ba06e32576b.', STARTKEY => '', ENDKEY => ''} 2023-06-08 16:55:04,230 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 4ca15ea9ab6b9efd109f2ba06e32576b 2023-06-08 16:55:04,231 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686243303843.4ca15ea9ab6b9efd109f2ba06e32576b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:55:04,231 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 4ca15ea9ab6b9efd109f2ba06e32576b 2023-06-08 16:55:04,231 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 4ca15ea9ab6b9efd109f2ba06e32576b 2023-06-08 16:55:04,232 INFO [StoreOpener-4ca15ea9ab6b9efd109f2ba06e32576b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 4ca15ea9ab6b9efd109f2ba06e32576b 2023-06-08 16:55:04,234 DEBUG [StoreOpener-4ca15ea9ab6b9efd109f2ba06e32576b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/hbase/namespace/4ca15ea9ab6b9efd109f2ba06e32576b/info 2023-06-08 16:55:04,234 DEBUG [StoreOpener-4ca15ea9ab6b9efd109f2ba06e32576b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/hbase/namespace/4ca15ea9ab6b9efd109f2ba06e32576b/info 2023-06-08 16:55:04,235 INFO [StoreOpener-4ca15ea9ab6b9efd109f2ba06e32576b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4ca15ea9ab6b9efd109f2ba06e32576b columnFamilyName info 2023-06-08 16:55:04,235 INFO [StoreOpener-4ca15ea9ab6b9efd109f2ba06e32576b-1] regionserver.HStore(310): Store=4ca15ea9ab6b9efd109f2ba06e32576b/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:55:04,237 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/hbase/namespace/4ca15ea9ab6b9efd109f2ba06e32576b 2023-06-08 16:55:04,237 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/hbase/namespace/4ca15ea9ab6b9efd109f2ba06e32576b 2023-06-08 16:55:04,241 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 4ca15ea9ab6b9efd109f2ba06e32576b 2023-06-08 16:55:04,243 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/hbase/namespace/4ca15ea9ab6b9efd109f2ba06e32576b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 16:55:04,244 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 4ca15ea9ab6b9efd109f2ba06e32576b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=827808, jitterRate=0.05261304974555969}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-08 16:55:04,244 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 4ca15ea9ab6b9efd109f2ba06e32576b: 2023-06-08 16:55:04,246 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1686243303843.4ca15ea9ab6b9efd109f2ba06e32576b., pid=6, masterSystemTime=1686243304219 2023-06-08 16:55:04,249 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1686243303843.4ca15ea9ab6b9efd109f2ba06e32576b. 2023-06-08 16:55:04,249 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1686243303843.4ca15ea9ab6b9efd109f2ba06e32576b. 
2023-06-08 16:55:04,249 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=4ca15ea9ab6b9efd109f2ba06e32576b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,38967,1686243303213 2023-06-08 16:55:04,250 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1686243303843.4ca15ea9ab6b9efd109f2ba06e32576b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686243304249"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686243304249"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686243304249"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686243304249"}]},"ts":"1686243304249"} 2023-06-08 16:55:04,254 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-08 16:55:04,254 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 4ca15ea9ab6b9efd109f2ba06e32576b, server=jenkins-hbase20.apache.org,38967,1686243303213 in 188 msec 2023-06-08 16:55:04,257 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-08 16:55:04,258 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=4ca15ea9ab6b9efd109f2ba06e32576b, ASSIGN in 351 msec 2023-06-08 16:55:04,259 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-08 16:55:04,259 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686243304259"}]},"ts":"1686243304259"} 2023-06-08 16:55:04,261 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-08 16:55:04,263 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-08 16:55:04,265 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 420 msec 2023-06-08 16:55:04,347 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-08 16:55:04,349 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-08 16:55:04,349 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:55:04,360 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-08 16:55:04,371 DEBUG [Listener at localhost.localdomain/46243-EventThread] 
zookeeper.ZKWatcher(600): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-08 16:55:04,375 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 15 msec 2023-06-08 16:55:04,383 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-08 16:55:04,397 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-08 16:55:04,404 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 19 msec 2023-06-08 16:55:04,418 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-08 16:55:04,420 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-08 16:55:04,420 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.170sec 2023-06-08 16:55:04,420 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-08 16:55:04,420 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-06-08 16:55:04,420 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-08 16:55:04,420 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44707,1686243303167-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-08 16:55:04,420 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44707,1686243303167-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-06-08 16:55:04,423 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-08 16:55:04,429 DEBUG [Listener at localhost.localdomain/46243] zookeeper.ReadOnlyZKClient(139): Connect 0x443da58d to 127.0.0.1:53698 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-08 16:55:04,435 DEBUG [Listener at localhost.localdomain/46243] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1a6f9e3c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-08 16:55:04,437 DEBUG [hconnection-0x1f7bbe7e-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-08 16:55:04,441 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:34258, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-08 16:55:04,444 INFO [Listener at localhost.localdomain/46243] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,44707,1686243303167 2023-06-08 16:55:04,444 INFO [Listener at localhost.localdomain/46243] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:55:04,447 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-08 16:55:04,447 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:55:04,448 INFO [Listener at localhost.localdomain/46243] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-08 16:55:04,461 INFO [Listener at localhost.localdomain/46243] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-06-08 16:55:04,461 INFO [Listener at localhost.localdomain/46243] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 16:55:04,461 INFO [Listener at localhost.localdomain/46243] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-08 16:55:04,461 INFO [Listener at localhost.localdomain/46243] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-08 16:55:04,461 INFO [Listener at localhost.localdomain/46243] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 16:55:04,462 INFO [Listener at localhost.localdomain/46243] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-08 
16:55:04,462 INFO [Listener at localhost.localdomain/46243] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-08 16:55:04,463 INFO [Listener at localhost.localdomain/46243] ipc.NettyRpcServer(120): Bind to /148.251.75.209:41407 2023-06-08 16:55:04,464 INFO [Listener at localhost.localdomain/46243] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-08 16:55:04,464 DEBUG [Listener at localhost.localdomain/46243] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-08 16:55:04,465 INFO [Listener at localhost.localdomain/46243] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:55:04,466 INFO [Listener at localhost.localdomain/46243] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:55:04,467 INFO [Listener at localhost.localdomain/46243] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41407 connecting to ZooKeeper ensemble=127.0.0.1:53698 2023-06-08 16:55:04,469 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): regionserver:414070x0, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-08 16:55:04,470 DEBUG [Listener at localhost.localdomain/46243] zookeeper.ZKUtil(162): regionserver:414070x0, quorum=127.0.0.1:53698, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-08 16:55:04,471 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41407-0x101cba4f7120005 connected 2023-06-08 16:55:04,472 DEBUG [Listener at localhost.localdomain/46243] zookeeper.ZKUtil(162): regionserver:41407-0x101cba4f7120005, quorum=127.0.0.1:53698, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-06-08 16:55:04,472 DEBUG [Listener at localhost.localdomain/46243] zookeeper.ZKUtil(164): regionserver:41407-0x101cba4f7120005, quorum=127.0.0.1:53698, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-08 16:55:04,473 DEBUG [Listener at localhost.localdomain/46243] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41407 2023-06-08 16:55:04,473 DEBUG [Listener at localhost.localdomain/46243] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41407 2023-06-08 16:55:04,473 DEBUG [Listener at localhost.localdomain/46243] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41407 2023-06-08 16:55:04,474 DEBUG [Listener at localhost.localdomain/46243] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41407 2023-06-08 16:55:04,474 DEBUG [Listener at localhost.localdomain/46243] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41407 2023-06-08 16:55:04,478 INFO [RS:1;jenkins-hbase20:41407] regionserver.HRegionServer(951): ClusterId : c638f6b5-ac1e-4e3e-ae8f-4552f300975c 2023-06-08 16:55:04,479 DEBUG [RS:1;jenkins-hbase20:41407] procedure.RegionServerProcedureManagerHost(43): Procedure 
flush-table-proc initializing 2023-06-08 16:55:04,481 DEBUG [RS:1;jenkins-hbase20:41407] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-08 16:55:04,481 DEBUG [RS:1;jenkins-hbase20:41407] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-08 16:55:04,483 DEBUG [RS:1;jenkins-hbase20:41407] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-08 16:55:04,484 DEBUG [RS:1;jenkins-hbase20:41407] zookeeper.ReadOnlyZKClient(139): Connect 0x6c0872ca to 127.0.0.1:53698 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-08 16:55:04,494 DEBUG [RS:1;jenkins-hbase20:41407] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@62cf813b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-08 16:55:04,494 DEBUG [RS:1;jenkins-hbase20:41407] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@66f019ba, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-08 16:55:04,504 DEBUG [RS:1;jenkins-hbase20:41407] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase20:41407 2023-06-08 16:55:04,505 INFO [RS:1;jenkins-hbase20:41407] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-08 16:55:04,505 INFO [RS:1;jenkins-hbase20:41407] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-08 16:55:04,505 DEBUG [RS:1;jenkins-hbase20:41407] regionserver.HRegionServer(1022): About to register with Master. 
2023-06-08 16:55:04,506 INFO [RS:1;jenkins-hbase20:41407] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,44707,1686243303167 with isa=jenkins-hbase20.apache.org/148.251.75.209:41407, startcode=1686243304460 2023-06-08 16:55:04,506 DEBUG [RS:1;jenkins-hbase20:41407] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-08 16:55:04,509 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:52709, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-06-08 16:55:04,509 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44707] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,41407,1686243304460 2023-06-08 16:55:04,510 DEBUG [RS:1;jenkins-hbase20:41407] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06 2023-06-08 16:55:04,510 DEBUG [RS:1;jenkins-hbase20:41407] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:41115 2023-06-08 16:55:04,510 DEBUG [RS:1;jenkins-hbase20:41407] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-08 16:55:04,511 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 16:55:04,511 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): regionserver:38967-0x101cba4f7120001, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 16:55:04,511 DEBUG [RS:1;jenkins-hbase20:41407] zookeeper.ZKUtil(162): regionserver:41407-0x101cba4f7120005, quorum=127.0.0.1:53698, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,41407,1686243304460 2023-06-08 16:55:04,511 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,41407,1686243304460] 2023-06-08 16:55:04,512 WARN [RS:1;jenkins-hbase20:41407] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-06-08 16:55:04,512 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38967-0x101cba4f7120001, quorum=127.0.0.1:53698, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38967,1686243303213 2023-06-08 16:55:04,512 INFO [RS:1;jenkins-hbase20:41407] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 16:55:04,512 DEBUG [RS:1;jenkins-hbase20:41407] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,41407,1686243304460 2023-06-08 16:55:04,512 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38967-0x101cba4f7120001, quorum=127.0.0.1:53698, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,41407,1686243304460 2023-06-08 16:55:04,517 DEBUG [RS:1;jenkins-hbase20:41407] zookeeper.ZKUtil(162): regionserver:41407-0x101cba4f7120005, quorum=127.0.0.1:53698, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38967,1686243303213 2023-06-08 16:55:04,518 DEBUG [RS:1;jenkins-hbase20:41407] zookeeper.ZKUtil(162): regionserver:41407-0x101cba4f7120005, quorum=127.0.0.1:53698, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,41407,1686243304460 2023-06-08 16:55:04,519 DEBUG [RS:1;jenkins-hbase20:41407] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-08 16:55:04,519 INFO [RS:1;jenkins-hbase20:41407] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-08 16:55:04,522 INFO [RS:1;jenkins-hbase20:41407] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-08 16:55:04,523 INFO [RS:1;jenkins-hbase20:41407] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-08 16:55:04,523 INFO [RS:1;jenkins-hbase20:41407] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-08 16:55:04,523 INFO [RS:1;jenkins-hbase20:41407] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-08 16:55:04,524 INFO [RS:1;jenkins-hbase20:41407] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-06-08 16:55:04,524 DEBUG [RS:1;jenkins-hbase20:41407] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:55:04,525 DEBUG [RS:1;jenkins-hbase20:41407] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:55:04,525 DEBUG [RS:1;jenkins-hbase20:41407] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:55:04,525 DEBUG [RS:1;jenkins-hbase20:41407] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:55:04,525 DEBUG [RS:1;jenkins-hbase20:41407] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:55:04,525 DEBUG [RS:1;jenkins-hbase20:41407] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-08 16:55:04,525 DEBUG [RS:1;jenkins-hbase20:41407] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:55:04,525 DEBUG [RS:1;jenkins-hbase20:41407] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:55:04,525 DEBUG [RS:1;jenkins-hbase20:41407] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:55:04,525 DEBUG [RS:1;jenkins-hbase20:41407] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:55:04,527 INFO [RS:1;jenkins-hbase20:41407] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-08 16:55:04,527 INFO [RS:1;jenkins-hbase20:41407] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-08 16:55:04,527 INFO [RS:1;jenkins-hbase20:41407] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-08 16:55:04,537 INFO [RS:1;jenkins-hbase20:41407] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-08 16:55:04,537 INFO [RS:1;jenkins-hbase20:41407] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,41407,1686243304460-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-06-08 16:55:04,547 INFO [RS:1;jenkins-hbase20:41407] regionserver.Replication(203): jenkins-hbase20.apache.org,41407,1686243304460 started 2023-06-08 16:55:04,547 INFO [RS:1;jenkins-hbase20:41407] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,41407,1686243304460, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:41407, sessionid=0x101cba4f7120005 2023-06-08 16:55:04,547 INFO [Listener at localhost.localdomain/46243] hbase.HBaseTestingUtility(3254): Started new server=Thread[RS:1;jenkins-hbase20:41407,5,FailOnTimeoutGroup] 2023-06-08 16:55:04,547 DEBUG [RS:1;jenkins-hbase20:41407] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-08 16:55:04,547 INFO [Listener at localhost.localdomain/46243] wal.TestLogRolling(323): Replication=2 2023-06-08 16:55:04,547 DEBUG [RS:1;jenkins-hbase20:41407] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,41407,1686243304460 2023-06-08 16:55:04,548 DEBUG [RS:1;jenkins-hbase20:41407] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,41407,1686243304460' 2023-06-08 16:55:04,549 DEBUG [RS:1;jenkins-hbase20:41407] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-08 16:55:04,549 DEBUG [RS:1;jenkins-hbase20:41407] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-08 16:55:04,551 DEBUG [Listener at localhost.localdomain/46243] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-06-08 16:55:04,551 DEBUG [RS:1;jenkins-hbase20:41407] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-08 16:55:04,551 DEBUG [RS:1;jenkins-hbase20:41407] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-08 16:55:04,551 DEBUG [RS:1;jenkins-hbase20:41407] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,41407,1686243304460 2023-06-08 16:55:04,552 DEBUG [RS:1;jenkins-hbase20:41407] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,41407,1686243304460' 2023-06-08 16:55:04,552 DEBUG [RS:1;jenkins-hbase20:41407] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-06-08 16:55:04,553 DEBUG [RS:1;jenkins-hbase20:41407] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-08 16:55:04,553 DEBUG [RS:1;jenkins-hbase20:41407] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-08 16:55:04,553 INFO [RS:1;jenkins-hbase20:41407] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-08 16:55:04,554 INFO [RS:1;jenkins-hbase20:41407] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-06-08 16:55:04,555 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:54062, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-06-08 16:55:04,556 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44707] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-06-08 16:55:04,556 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44707] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 2023-06-08 16:55:04,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44707] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-08 16:55:04,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44707] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath 2023-06-08 16:55:04,561 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_PRE_OPERATION 2023-06-08 16:55:04,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44707] master.MasterRpcServices(697): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnDatanodeDeath" procId is: 9 2023-06-08 16:55:04,562 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-08 16:55:04,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44707] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-08 16:55:04,564 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0f108deba6795807c04cf20a4ad86d1d 2023-06-08 16:55:04,564 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0f108deba6795807c04cf20a4ad86d1d empty. 
2023-06-08 16:55:04,565 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0f108deba6795807c04cf20a4ad86d1d 2023-06-08 16:55:04,565 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnDatanodeDeath regions 2023-06-08 16:55:04,581 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/.tabledesc/.tableinfo.0000000001 2023-06-08 16:55:04,583 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0f108deba6795807c04cf20a4ad86d1d, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1686243304556.0f108deba6795807c04cf20a4ad86d1d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/.tmp 2023-06-08 16:55:04,593 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1686243304556.0f108deba6795807c04cf20a4ad86d1d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:55:04,593 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1604): Closing 0f108deba6795807c04cf20a4ad86d1d, disabling compactions & flushes 2023-06-08 16:55:04,593 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1686243304556.0f108deba6795807c04cf20a4ad86d1d. 2023-06-08 16:55:04,593 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1686243304556.0f108deba6795807c04cf20a4ad86d1d. 2023-06-08 16:55:04,593 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1686243304556.0f108deba6795807c04cf20a4ad86d1d. after waiting 0 ms 2023-06-08 16:55:04,593 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1686243304556.0f108deba6795807c04cf20a4ad86d1d. 2023-06-08 16:55:04,593 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1686243304556.0f108deba6795807c04cf20a4ad86d1d. 
2023-06-08 16:55:04,593 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1558): Region close journal for 0f108deba6795807c04cf20a4ad86d1d: 2023-06-08 16:55:04,596 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ADD_TO_META 2023-06-08 16:55:04,599 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1686243304556.0f108deba6795807c04cf20a4ad86d1d.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1686243304598"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686243304598"}]},"ts":"1686243304598"} 2023-06-08 16:55:04,601 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-08 16:55:04,602 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-08 16:55:04,602 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686243304602"}]},"ts":"1686243304602"} 2023-06-08 16:55:04,604 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLING in hbase:meta 2023-06-08 16:55:04,610 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-06-08 16:55:04,612 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-06-08 16:55:04,612 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-06-08 16:55:04,612 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-06-08 16:55:04,613 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=0f108deba6795807c04cf20a4ad86d1d, ASSIGN}] 2023-06-08 16:55:04,615 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=0f108deba6795807c04cf20a4ad86d1d, ASSIGN 2023-06-08 16:55:04,616 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=0f108deba6795807c04cf20a4ad86d1d, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,41407,1686243304460; forceNewPlan=false, retain=false 2023-06-08 16:55:04,658 INFO [RS:1;jenkins-hbase20:41407] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C41407%2C1686243304460, suffix=, logDir=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,41407,1686243304460, 
archiveDir=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/oldWALs, maxLogs=32 2023-06-08 16:55:04,676 INFO [RS:1;jenkins-hbase20:41407] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,41407,1686243304460/jenkins-hbase20.apache.org%2C41407%2C1686243304460.1686243304662 2023-06-08 16:55:04,677 DEBUG [RS:1;jenkins-hbase20:41407] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41817,DS-7bbf8e6b-ab67-452d-9ab5-c145245a9853,DISK], DatanodeInfoWithStorage[127.0.0.1:35999,DS-ca2f5a60-f3b7-4cbe-9e7f-af6972f893b2,DISK]] 2023-06-08 16:55:04,772 INFO [jenkins-hbase20:44707] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-06-08 16:55:04,776 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=0f108deba6795807c04cf20a4ad86d1d, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,41407,1686243304460 2023-06-08 16:55:04,776 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1686243304556.0f108deba6795807c04cf20a4ad86d1d.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1686243304775"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686243304775"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686243304775"}]},"ts":"1686243304775"} 2023-06-08 16:55:04,780 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 0f108deba6795807c04cf20a4ad86d1d, server=jenkins-hbase20.apache.org,41407,1686243304460}] 2023-06-08 16:55:04,934 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,41407,1686243304460 2023-06-08 16:55:04,935 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-08 16:55:04,940 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:34986, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-08 16:55:04,946 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnDatanodeDeath,,1686243304556.0f108deba6795807c04cf20a4ad86d1d. 
2023-06-08 16:55:04,946 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0f108deba6795807c04cf20a4ad86d1d, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1686243304556.0f108deba6795807c04cf20a4ad86d1d.', STARTKEY => '', ENDKEY => ''} 2023-06-08 16:55:04,947 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnDatanodeDeath 0f108deba6795807c04cf20a4ad86d1d 2023-06-08 16:55:04,947 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1686243304556.0f108deba6795807c04cf20a4ad86d1d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:55:04,947 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 0f108deba6795807c04cf20a4ad86d1d 2023-06-08 16:55:04,947 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 0f108deba6795807c04cf20a4ad86d1d 2023-06-08 16:55:04,949 INFO [StoreOpener-0f108deba6795807c04cf20a4ad86d1d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 0f108deba6795807c04cf20a4ad86d1d 2023-06-08 16:55:04,952 DEBUG [StoreOpener-0f108deba6795807c04cf20a4ad86d1d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0f108deba6795807c04cf20a4ad86d1d/info 2023-06-08 16:55:04,952 DEBUG [StoreOpener-0f108deba6795807c04cf20a4ad86d1d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0f108deba6795807c04cf20a4ad86d1d/info 2023-06-08 16:55:04,953 INFO [StoreOpener-0f108deba6795807c04cf20a4ad86d1d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0f108deba6795807c04cf20a4ad86d1d columnFamilyName info 2023-06-08 16:55:04,953 INFO [StoreOpener-0f108deba6795807c04cf20a4ad86d1d-1] regionserver.HStore(310): Store=0f108deba6795807c04cf20a4ad86d1d/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:55:04,957 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0f108deba6795807c04cf20a4ad86d1d 2023-06-08 16:55:04,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0f108deba6795807c04cf20a4ad86d1d 2023-06-08 16:55:04,962 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 0f108deba6795807c04cf20a4ad86d1d 2023-06-08 16:55:04,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0f108deba6795807c04cf20a4ad86d1d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 16:55:04,968 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 0f108deba6795807c04cf20a4ad86d1d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=858773, jitterRate=0.09198753535747528}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-08 16:55:04,968 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 0f108deba6795807c04cf20a4ad86d1d: 2023-06-08 16:55:04,970 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnDatanodeDeath,,1686243304556.0f108deba6795807c04cf20a4ad86d1d., pid=11, masterSystemTime=1686243304934 2023-06-08 16:55:04,973 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnDatanodeDeath,,1686243304556.0f108deba6795807c04cf20a4ad86d1d. 2023-06-08 16:55:04,974 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnDatanodeDeath,,1686243304556.0f108deba6795807c04cf20a4ad86d1d. 
2023-06-08 16:55:04,975 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=0f108deba6795807c04cf20a4ad86d1d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,41407,1686243304460 2023-06-08 16:55:04,975 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1686243304556.0f108deba6795807c04cf20a4ad86d1d.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1686243304975"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686243304975"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686243304975"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686243304975"}]},"ts":"1686243304975"} 2023-06-08 16:55:04,980 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-06-08 16:55:04,980 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 0f108deba6795807c04cf20a4ad86d1d, server=jenkins-hbase20.apache.org,41407,1686243304460 in 197 msec 2023-06-08 16:55:04,983 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-06-08 16:55:04,984 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=0f108deba6795807c04cf20a4ad86d1d, ASSIGN in 367 msec 2023-06-08 16:55:04,985 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-08 16:55:04,986 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686243304986"}]},"ts":"1686243304986"} 2023-06-08 16:55:04,987 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLED in hbase:meta 2023-06-08 16:55:04,990 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_POST_OPERATION 2023-06-08 16:55:04,992 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath in 433 msec 2023-06-08 16:55:07,390 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-08 16:55:09,464 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-06-08 16:55:09,466 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-06-08 16:55:10,519 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnDatanodeDeath' 2023-06-08 16:55:14,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44707] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-08 16:55:14,567 INFO [Listener at localhost.localdomain/46243] 
client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnDatanodeDeath, procId: 9 completed 2023-06-08 16:55:14,576 DEBUG [Listener at localhost.localdomain/46243] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRollOnDatanodeDeath 2023-06-08 16:55:14,576 DEBUG [Listener at localhost.localdomain/46243] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnDatanodeDeath,,1686243304556.0f108deba6795807c04cf20a4ad86d1d. 2023-06-08 16:55:14,587 WARN [Listener at localhost.localdomain/46243] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 16:55:14,590 WARN [Listener at localhost.localdomain/46243] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 16:55:14,591 INFO [Listener at localhost.localdomain/46243] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 16:55:14,595 INFO [Listener at localhost.localdomain/46243] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/java.io.tmpdir/Jetty_localhost_42251_datanode____.jfz0p/webapp 2023-06-08 16:55:14,669 INFO [Listener at localhost.localdomain/46243] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42251 2023-06-08 16:55:14,678 WARN [Listener at localhost.localdomain/37785] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 16:55:14,690 WARN [Listener at localhost.localdomain/37785] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 16:55:14,692 WARN [Listener at localhost.localdomain/37785] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 16:55:14,694 INFO [Listener at localhost.localdomain/37785] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 16:55:14,698 INFO [Listener at localhost.localdomain/37785] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/java.io.tmpdir/Jetty_localhost_36603_datanode____43q4pp/webapp 2023-06-08 16:55:14,772 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2e733a8fc5efefd5: Processing first storage report for DS-17c57e98-0b7d-4ece-9449-2d30299f0570 from datanode b890f9f5-138c-46e0-afe6-628a77aa48ee 2023-06-08 16:55:14,772 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2e733a8fc5efefd5: from storage DS-17c57e98-0b7d-4ece-9449-2d30299f0570 node DatanodeRegistration(127.0.0.1:45233, datanodeUuid=b890f9f5-138c-46e0-afe6-628a77aa48ee, infoPort=42421, infoSecurePort=0, ipcPort=37785, storageInfo=lv=-57;cid=testClusterID;nsid=1895245100;c=1686243302687), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 16:55:14,772 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2e733a8fc5efefd5: Processing first storage report for 
DS-ff01a254-1629-45ab-bc27-2a31f55915be from datanode b890f9f5-138c-46e0-afe6-628a77aa48ee 2023-06-08 16:55:14,772 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2e733a8fc5efefd5: from storage DS-ff01a254-1629-45ab-bc27-2a31f55915be node DatanodeRegistration(127.0.0.1:45233, datanodeUuid=b890f9f5-138c-46e0-afe6-628a77aa48ee, infoPort=42421, infoSecurePort=0, ipcPort=37785, storageInfo=lv=-57;cid=testClusterID;nsid=1895245100;c=1686243302687), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-08 16:55:14,839 INFO [Listener at localhost.localdomain/37785] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36603 2023-06-08 16:55:14,851 WARN [Listener at localhost.localdomain/37361] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 16:55:14,872 WARN [Listener at localhost.localdomain/37361] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 16:55:14,875 WARN [Listener at localhost.localdomain/37361] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 16:55:14,876 INFO [Listener at localhost.localdomain/37361] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 16:55:14,885 INFO [Listener at localhost.localdomain/37361] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/java.io.tmpdir/Jetty_localhost_36427_datanode____.av0a1v/webapp 2023-06-08 16:55:14,944 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbbd215f48e067f45: Processing first storage report for DS-9b7ce5c9-addc-428e-ab71-5ce9b73906b1 from datanode e6158fea-dd42-444a-b5de-819955de734b 2023-06-08 16:55:14,944 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbbd215f48e067f45: from storage DS-9b7ce5c9-addc-428e-ab71-5ce9b73906b1 node DatanodeRegistration(127.0.0.1:45209, datanodeUuid=e6158fea-dd42-444a-b5de-819955de734b, infoPort=36025, infoSecurePort=0, ipcPort=37361, storageInfo=lv=-57;cid=testClusterID;nsid=1895245100;c=1686243302687), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-08 16:55:14,944 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbbd215f48e067f45: Processing first storage report for DS-619dd4b8-0af9-4d86-8148-57f54e8ade06 from datanode e6158fea-dd42-444a-b5de-819955de734b 2023-06-08 16:55:14,944 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbbd215f48e067f45: from storage DS-619dd4b8-0af9-4d86-8148-57f54e8ade06 node DatanodeRegistration(127.0.0.1:45209, datanodeUuid=e6158fea-dd42-444a-b5de-819955de734b, infoPort=36025, infoSecurePort=0, ipcPort=37361, storageInfo=lv=-57;cid=testClusterID;nsid=1895245100;c=1686243302687), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 16:55:14,970 INFO [Listener at localhost.localdomain/37361] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36427 2023-06-08 16:55:14,977 WARN [Listener at localhost.localdomain/45599] common.MetricsLoggerTask(153): Metrics logging 
will not be async since the logger is not log4j 2023-06-08 16:55:15,059 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x26b6b5baff9a48b8: Processing first storage report for DS-9e192c7c-b0c9-4ac8-8c93-ebfe35e9c1dd from datanode d44ed0d1-e392-47a1-b316-4d77acdde630 2023-06-08 16:55:15,059 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x26b6b5baff9a48b8: from storage DS-9e192c7c-b0c9-4ac8-8c93-ebfe35e9c1dd node DatanodeRegistration(127.0.0.1:42053, datanodeUuid=d44ed0d1-e392-47a1-b316-4d77acdde630, infoPort=34077, infoSecurePort=0, ipcPort=45599, storageInfo=lv=-57;cid=testClusterID;nsid=1895245100;c=1686243302687), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 16:55:15,059 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x26b6b5baff9a48b8: Processing first storage report for DS-384b4e4d-574f-49ca-907a-9e8d4ce68225 from datanode d44ed0d1-e392-47a1-b316-4d77acdde630 2023-06-08 16:55:15,059 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x26b6b5baff9a48b8: from storage DS-384b4e4d-574f-49ca-907a-9e8d4ce68225 node DatanodeRegistration(127.0.0.1:42053, datanodeUuid=d44ed0d1-e392-47a1-b316-4d77acdde630, infoPort=34077, infoSecurePort=0, ipcPort=45599, storageInfo=lv=-57;cid=testClusterID;nsid=1895245100;c=1686243302687), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 16:55:15,084 WARN [Listener at localhost.localdomain/45599] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 16:55:15,086 WARN [ResponseProcessor for block BP-1723220111-148.251.75.209-1686243302687:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1723220111-148.251.75.209-1686243302687:blk_1073741829_1005 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-08 16:55:15,087 WARN [ResponseProcessor for block BP-1723220111-148.251.75.209-1686243302687:blk_1073741838_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1723220111-148.251.75.209-1686243302687:blk_1073741838_1014 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-08 16:55:15,087 WARN [DataStreamer for file /user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/MasterData/WALs/jenkins-hbase20.apache.org,44707,1686243303167/jenkins-hbase20.apache.org%2C44707%2C1686243303167.1686243303314 block BP-1723220111-148.251.75.209-1686243302687:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-1723220111-148.251.75.209-1686243302687:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:41817,DS-7bbf8e6b-ab67-452d-9ab5-c145245a9853,DISK], DatanodeInfoWithStorage[127.0.0.1:35999,DS-ca2f5a60-f3b7-4cbe-9e7f-af6972f893b2,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:41817,DS-7bbf8e6b-ab67-452d-9ab5-c145245a9853,DISK]) is 
bad. 2023-06-08 16:55:15,087 WARN [ResponseProcessor for block BP-1723220111-148.251.75.209-1686243302687:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1723220111-148.251.75.209-1686243302687:blk_1073741832_1008 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-08 16:55:15,089 WARN [ResponseProcessor for block BP-1723220111-148.251.75.209-1686243302687:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1723220111-148.251.75.209-1686243302687:blk_1073741833_1009 java.io.IOException: Bad response ERROR for BP-1723220111-148.251.75.209-1686243302687:blk_1073741833_1009 from datanode DatanodeInfoWithStorage[127.0.0.1:41817,DS-7bbf8e6b-ab67-452d-9ab5-c145245a9853,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-06-08 16:55:15,088 WARN [DataStreamer for file /user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,41407,1686243304460/jenkins-hbase20.apache.org%2C41407%2C1686243304460.1686243304662 block BP-1723220111-148.251.75.209-1686243302687:blk_1073741838_1014] hdfs.DataStreamer(1548): Error Recovery for BP-1723220111-148.251.75.209-1686243302687:blk_1073741838_1014 in pipeline [DatanodeInfoWithStorage[127.0.0.1:41817,DS-7bbf8e6b-ab67-452d-9ab5-c145245a9853,DISK], DatanodeInfoWithStorage[127.0.0.1:35999,DS-ca2f5a60-f3b7-4cbe-9e7f-af6972f893b2,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:41817,DS-7bbf8e6b-ab67-452d-9ab5-c145245a9853,DISK]) is bad. 2023-06-08 16:55:15,089 WARN [DataStreamer for file /user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,38967,1686243303213/jenkins-hbase20.apache.org%2C38967%2C1686243303213.meta.1686243303770.meta block BP-1723220111-148.251.75.209-1686243302687:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-1723220111-148.251.75.209-1686243302687:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:35999,DS-ca2f5a60-f3b7-4cbe-9e7f-af6972f893b2,DISK], DatanodeInfoWithStorage[127.0.0.1:41817,DS-7bbf8e6b-ab67-452d-9ab5-c145245a9853,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:41817,DS-7bbf8e6b-ab67-452d-9ab5-c145245a9853,DISK]) is bad. 2023-06-08 16:55:15,089 WARN [DataStreamer for file /user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,38967,1686243303213/jenkins-hbase20.apache.org%2C38967%2C1686243303213.1686243303599 block BP-1723220111-148.251.75.209-1686243302687:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-1723220111-148.251.75.209-1686243302687:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:41817,DS-7bbf8e6b-ab67-452d-9ab5-c145245a9853,DISK], DatanodeInfoWithStorage[127.0.0.1:35999,DS-ca2f5a60-f3b7-4cbe-9e7f-af6972f893b2,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:41817,DS-7bbf8e6b-ab67-452d-9ab5-c145245a9853,DISK]) is bad. 
2023-06-08 16:55:15,089 WARN [PacketResponder: BP-1723220111-148.251.75.209-1686243302687:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:41817]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:15,097 INFO [Listener at localhost.localdomain/45599] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 16:55:15,098 WARN [BP-1723220111-148.251.75.209-1686243302687 heartbeating to localhost.localdomain/127.0.0.1:41115] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1723220111-148.251.75.209-1686243302687 (Datanode Uuid 467742bf-4c0e-4765-bdb0-37ffeff8ae1a) service to localhost.localdomain/127.0.0.1:41115 2023-06-08 16:55:15,099 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/cluster_6ce0e350-0cdf-eede-66c7-5cb864cdfac2/dfs/data/data3/current/BP-1723220111-148.251.75.209-1686243302687] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:55:15,100 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/cluster_6ce0e350-0cdf-eede-66c7-5cb864cdfac2/dfs/data/data4/current/BP-1723220111-148.251.75.209-1686243302687] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:55:15,101 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1959771105_17 at /127.0.0.1:45970 [Receiving block BP-1723220111-148.251.75.209-1686243302687:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:35999:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:45970 dst: /127.0.0.1:35999 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:15,104 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1959771105_17 at /127.0.0.1:45960 [Receiving block BP-1723220111-148.251.75.209-1686243302687:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:35999:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:45960 dst: /127.0.0.1:35999 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:35999 remote=/127.0.0.1:45960]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:15,105 WARN [PacketResponder: BP-1723220111-148.251.75.209-1686243302687:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:35999]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:15,106 WARN [PacketResponder: BP-1723220111-148.251.75.209-1686243302687:blk_1073741838_1014, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:35999]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:15,105 WARN [PacketResponder: BP-1723220111-148.251.75.209-1686243302687:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:35999]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:15,105 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_587180065_17 at /127.0.0.1:46026 [Receiving block BP-1723220111-148.251.75.209-1686243302687:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:35999:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:46026 dst: /127.0.0.1:35999 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:35999 remote=/127.0.0.1:46026]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:15,105 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1365030196_17 at /127.0.0.1:45922 [Receiving block BP-1723220111-148.251.75.209-1686243302687:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:35999:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:45922 dst: /127.0.0.1:35999 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:35999 remote=/127.0.0.1:45922]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:15,106 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1959771105_17 at /127.0.0.1:40240 [Receiving block BP-1723220111-148.251.75.209-1686243302687:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:41817:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:40240 dst: /127.0.0.1:41817 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:15,109 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_587180065_17 at /127.0.0.1:40288 [Receiving block BP-1723220111-148.251.75.209-1686243302687:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:41817:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:40288 dst: /127.0.0.1:41817 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:15,109 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1365030196_17 at /127.0.0.1:40208 [Receiving block BP-1723220111-148.251.75.209-1686243302687:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:41817:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:40208 dst: /127.0.0.1:41817 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:15,205 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1959771105_17 at /127.0.0.1:40256 [Receiving block BP-1723220111-148.251.75.209-1686243302687:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:41817:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:40256 dst: /127.0.0.1:41817 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:15,209 WARN [Listener at localhost.localdomain/45599] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 16:55:15,210 WARN [ResponseProcessor for block BP-1723220111-148.251.75.209-1686243302687:blk_1073741832_1017] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1723220111-148.251.75.209-1686243302687:blk_1073741832_1017 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-08 16:55:15,211 WARN [ResponseProcessor for block BP-1723220111-148.251.75.209-1686243302687:blk_1073741838_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1723220111-148.251.75.209-1686243302687:blk_1073741838_1016 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-08 16:55:15,211 WARN [ResponseProcessor for block BP-1723220111-148.251.75.209-1686243302687:blk_1073741833_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1723220111-148.251.75.209-1686243302687:blk_1073741833_1018 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-08 16:55:15,211 WARN [ResponseProcessor for block BP-1723220111-148.251.75.209-1686243302687:blk_1073741829_1015] 
hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1723220111-148.251.75.209-1686243302687:blk_1073741829_1015 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-08 16:55:15,219 INFO [Listener at localhost.localdomain/45599] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 16:55:15,324 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1365030196_17 at /127.0.0.1:47092 [Receiving block BP-1723220111-148.251.75.209-1686243302687:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:35999:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:47092 dst: /127.0.0.1:35999 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:15,329 WARN [BP-1723220111-148.251.75.209-1686243302687 heartbeating to localhost.localdomain/127.0.0.1:41115] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-08 16:55:15,328 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_587180065_17 at /127.0.0.1:47094 [Receiving block BP-1723220111-148.251.75.209-1686243302687:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:35999:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:47094 dst: /127.0.0.1:35999 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:15,327 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1959771105_17 at /127.0.0.1:47096 [Receiving block BP-1723220111-148.251.75.209-1686243302687:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:35999:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:47096 dst: /127.0.0.1:35999 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:15,326 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1959771105_17 at /127.0.0.1:47090 [Receiving block BP-1723220111-148.251.75.209-1686243302687:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:35999:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:47090 dst: /127.0.0.1:35999 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:15,330 WARN [BP-1723220111-148.251.75.209-1686243302687 heartbeating to localhost.localdomain/127.0.0.1:41115] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1723220111-148.251.75.209-1686243302687 (Datanode Uuid 77a4bf3c-b092-40e2-af83-20f2c71af9dd) service to localhost.localdomain/127.0.0.1:41115 2023-06-08 16:55:15,334 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/cluster_6ce0e350-0cdf-eede-66c7-5cb864cdfac2/dfs/data/data1/current/BP-1723220111-148.251.75.209-1686243302687] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:55:15,335 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/cluster_6ce0e350-0cdf-eede-66c7-5cb864cdfac2/dfs/data/data2/current/BP-1723220111-148.251.75.209-1686243302687] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:55:15,342 DEBUG [Listener at localhost.localdomain/45599] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-08 16:55:15,345 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:44408, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-08 16:55:15,346 WARN [RS:1;jenkins-hbase20:41407.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=4, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35999,DS-ca2f5a60-f3b7-4cbe-9e7f-af6972f893b2,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:55:15,346 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C41407%2C1686243304460:(num 1686243304662) roll requested 2023-06-08 16:55:15,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41407] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35999,DS-ca2f5a60-f3b7-4cbe-9e7f-af6972f893b2,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:55:15,348 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41407] ipc.CallRunner(144): callId: 9 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:44408 deadline: 1686243325345, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL 2023-06-08 16:55:15,358 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL 2023-06-08 16:55:15,359 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,41407,1686243304460/jenkins-hbase20.apache.org%2C41407%2C1686243304460.1686243304662 with entries=1, filesize=467 B; new WAL /user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,41407,1686243304460/jenkins-hbase20.apache.org%2C41407%2C1686243304460.1686243315347 2023-06-08 16:55:15,361 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45209,DS-9b7ce5c9-addc-428e-ab71-5ce9b73906b1,DISK], DatanodeInfoWithStorage[127.0.0.1:42053,DS-9e192c7c-b0c9-4ac8-8c93-ebfe35e9c1dd,DISK]] 2023-06-08 16:55:15,361 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35999,DS-ca2f5a60-f3b7-4cbe-9e7f-af6972f893b2,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:55:15,361 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,41407,1686243304460/jenkins-hbase20.apache.org%2C41407%2C1686243304460.1686243304662 is not closed yet, will try archiving it next time 2023-06-08 16:55:15,361 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,41407,1686243304460/jenkins-hbase20.apache.org%2C41407%2C1686243304460.1686243304662; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35999,DS-ca2f5a60-f3b7-4cbe-9e7f-af6972f893b2,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:55:15,361 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,41407,1686243304460/jenkins-hbase20.apache.org%2C41407%2C1686243304460.1686243304662 to hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/oldWALs/jenkins-hbase20.apache.org%2C41407%2C1686243304460.1686243304662 2023-06-08 16:55:27,473 INFO [Listener at localhost.localdomain/45599] wal.TestLogRolling(375): log.getCurrentFileName(): hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,41407,1686243304460/jenkins-hbase20.apache.org%2C41407%2C1686243304460.1686243315347 2023-06-08 16:55:27,474 WARN [Listener at localhost.localdomain/45599] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 16:55:27,476 WARN [ResponseProcessor for block BP-1723220111-148.251.75.209-1686243302687:blk_1073741839_1019] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1723220111-148.251.75.209-1686243302687:blk_1073741839_1019 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-08 16:55:27,477 WARN [DataStreamer for file /user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,41407,1686243304460/jenkins-hbase20.apache.org%2C41407%2C1686243304460.1686243315347 block BP-1723220111-148.251.75.209-1686243302687:blk_1073741839_1019] hdfs.DataStreamer(1548): Error Recovery for BP-1723220111-148.251.75.209-1686243302687:blk_1073741839_1019 in pipeline 
[DatanodeInfoWithStorage[127.0.0.1:45209,DS-9b7ce5c9-addc-428e-ab71-5ce9b73906b1,DISK], DatanodeInfoWithStorage[127.0.0.1:42053,DS-9e192c7c-b0c9-4ac8-8c93-ebfe35e9c1dd,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:45209,DS-9b7ce5c9-addc-428e-ab71-5ce9b73906b1,DISK]) is bad. 2023-06-08 16:55:27,484 INFO [Listener at localhost.localdomain/45599] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 16:55:27,486 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_587180065_17 at /127.0.0.1:55546 [Receiving block BP-1723220111-148.251.75.209-1686243302687:blk_1073741839_1019]] datanode.DataXceiver(323): 127.0.0.1:42053:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:55546 dst: /127.0.0.1:42053 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:42053 remote=/127.0.0.1:55546]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:27,487 WARN [PacketResponder: BP-1723220111-148.251.75.209-1686243302687:blk_1073741839_1019, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:42053]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:27,488 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_587180065_17 at 
/127.0.0.1:40700 [Receiving block BP-1723220111-148.251.75.209-1686243302687:blk_1073741839_1019]] datanode.DataXceiver(323): 127.0.0.1:45209:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:40700 dst: /127.0.0.1:45209 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:27,596 WARN [BP-1723220111-148.251.75.209-1686243302687 heartbeating to localhost.localdomain/127.0.0.1:41115] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-08 16:55:27,596 WARN [BP-1723220111-148.251.75.209-1686243302687 heartbeating to localhost.localdomain/127.0.0.1:41115] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1723220111-148.251.75.209-1686243302687 (Datanode Uuid e6158fea-dd42-444a-b5de-819955de734b) service to localhost.localdomain/127.0.0.1:41115 2023-06-08 16:55:27,597 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/cluster_6ce0e350-0cdf-eede-66c7-5cb864cdfac2/dfs/data/data7/current/BP-1723220111-148.251.75.209-1686243302687] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:55:27,597 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/cluster_6ce0e350-0cdf-eede-66c7-5cb864cdfac2/dfs/data/data8/current/BP-1723220111-148.251.75.209-1686243302687] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:55:27,604 WARN [sync.3] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42053,DS-9e192c7c-b0c9-4ac8-8c93-ebfe35e9c1dd,DISK]] 2023-06-08 16:55:27,604 WARN [sync.3] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42053,DS-9e192c7c-b0c9-4ac8-8c93-ebfe35e9c1dd,DISK]] 2023-06-08 16:55:27,604 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C41407%2C1686243304460:(num 1686243315347) roll requested 2023-06-08 16:55:27,608 WARN [Thread-638] hdfs.DataStreamer(1658): Abandoning BP-1723220111-148.251.75.209-1686243302687:blk_1073741840_1021 2023-06-08 16:55:27,610 WARN [Thread-638] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41817,DS-7bbf8e6b-ab67-452d-9ab5-c145245a9853,DISK] 2023-06-08 16:55:27,621 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,41407,1686243304460/jenkins-hbase20.apache.org%2C41407%2C1686243304460.1686243315347 with entries=2, filesize=2.36 KB; new WAL /user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,41407,1686243304460/jenkins-hbase20.apache.org%2C41407%2C1686243304460.1686243327604 2023-06-08 16:55:27,621 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42053,DS-9e192c7c-b0c9-4ac8-8c93-ebfe35e9c1dd,DISK], DatanodeInfoWithStorage[127.0.0.1:45233,DS-17c57e98-0b7d-4ece-9449-2d30299f0570,DISK]] 2023-06-08 16:55:27,621 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,41407,1686243304460/jenkins-hbase20.apache.org%2C41407%2C1686243304460.1686243315347 is not closed yet, will try archiving it next time 2023-06-08 16:55:30,082 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@4671db5] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:42053, datanodeUuid=d44ed0d1-e392-47a1-b316-4d77acdde630, infoPort=34077, infoSecurePort=0, ipcPort=45599, storageInfo=lv=-57;cid=testClusterID;nsid=1895245100;c=1686243302687):Failed to transfer BP-1723220111-148.251.75.209-1686243302687:blk_1073741839_1020 to 127.0.0.1:35999 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:31,612 WARN [Listener at localhost.localdomain/45599] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 16:55:31,615 WARN [ResponseProcessor for block BP-1723220111-148.251.75.209-1686243302687:blk_1073741841_1022] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1723220111-148.251.75.209-1686243302687:blk_1073741841_1022 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at 
org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-08 16:55:31,616 WARN [DataStreamer for file /user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,41407,1686243304460/jenkins-hbase20.apache.org%2C41407%2C1686243304460.1686243327604 block BP-1723220111-148.251.75.209-1686243302687:blk_1073741841_1022] hdfs.DataStreamer(1548): Error Recovery for BP-1723220111-148.251.75.209-1686243302687:blk_1073741841_1022 in pipeline [DatanodeInfoWithStorage[127.0.0.1:42053,DS-9e192c7c-b0c9-4ac8-8c93-ebfe35e9c1dd,DISK], DatanodeInfoWithStorage[127.0.0.1:45233,DS-17c57e98-0b7d-4ece-9449-2d30299f0570,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:42053,DS-9e192c7c-b0c9-4ac8-8c93-ebfe35e9c1dd,DISK]) is bad. 2023-06-08 16:55:31,624 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_587180065_17 at /127.0.0.1:33434 [Receiving block BP-1723220111-148.251.75.209-1686243302687:blk_1073741841_1022]] datanode.DataXceiver(323): 127.0.0.1:45233:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:33434 dst: /127.0.0.1:45233 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:45233 remote=/127.0.0.1:33434]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:31,624 WARN [PacketResponder: BP-1723220111-148.251.75.209-1686243302687:blk_1073741841_1022, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:45233]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:31,625 INFO [Listener at localhost.localdomain/45599] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 16:55:31,626 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_587180065_17 at /127.0.0.1:53880 [Receiving block BP-1723220111-148.251.75.209-1686243302687:blk_1073741841_1022]] datanode.DataXceiver(323): 127.0.0.1:42053:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:53880 dst: /127.0.0.1:42053 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:31,739 WARN [BP-1723220111-148.251.75.209-1686243302687 heartbeating to localhost.localdomain/127.0.0.1:41115] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-08 16:55:31,739 WARN [BP-1723220111-148.251.75.209-1686243302687 heartbeating to localhost.localdomain/127.0.0.1:41115] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1723220111-148.251.75.209-1686243302687 (Datanode Uuid d44ed0d1-e392-47a1-b316-4d77acdde630) service to localhost.localdomain/127.0.0.1:41115 2023-06-08 16:55:31,740 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/cluster_6ce0e350-0cdf-eede-66c7-5cb864cdfac2/dfs/data/data9/current/BP-1723220111-148.251.75.209-1686243302687] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:55:31,741 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/cluster_6ce0e350-0cdf-eede-66c7-5cb864cdfac2/dfs/data/data10/current/BP-1723220111-148.251.75.209-1686243302687] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:55:31,746 WARN [sync.1] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. current pipeline: [DatanodeInfoWithStorage[127.0.0.1:45233,DS-17c57e98-0b7d-4ece-9449-2d30299f0570,DISK]] 2023-06-08 16:55:31,746 WARN [sync.1] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:45233,DS-17c57e98-0b7d-4ece-9449-2d30299f0570,DISK]] 2023-06-08 16:55:31,747 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C41407%2C1686243304460:(num 1686243327604) roll requested 2023-06-08 16:55:31,750 WARN [Thread-648] hdfs.DataStreamer(1658): Abandoning BP-1723220111-148.251.75.209-1686243302687:blk_1073741842_1024 2023-06-08 16:55:31,751 WARN [Thread-648] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42053,DS-9e192c7c-b0c9-4ac8-8c93-ebfe35e9c1dd,DISK] 2023-06-08 16:55:31,753 WARN [Thread-648] hdfs.DataStreamer(1658): Abandoning BP-1723220111-148.251.75.209-1686243302687:blk_1073741843_1025 2023-06-08 16:55:31,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41407] regionserver.HRegion(9158): Flush requested on 0f108deba6795807c04cf20a4ad86d1d 2023-06-08 16:55:31,754 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 0f108deba6795807c04cf20a4ad86d1d 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-08 16:55:31,754 WARN [Thread-648] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45209,DS-9b7ce5c9-addc-428e-ab71-5ce9b73906b1,DISK] 2023-06-08 16:55:31,759 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_587180065_17 at /127.0.0.1:34562 [Receiving block BP-1723220111-148.251.75.209-1686243302687:blk_1073741844_1026]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/cluster_6ce0e350-0cdf-eede-66c7-5cb864cdfac2/dfs/data/data5/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/cluster_6ce0e350-0cdf-eede-66c7-5cb864cdfac2/dfs/data/data6/current]'}, localName='127.0.0.1:45233', datanodeUuid='b890f9f5-138c-46e0-afe6-628a77aa48ee', xmitsInProgress=0}:Exception transfering block BP-1723220111-148.251.75.209-1686243302687:blk_1073741844_1026 to mirror 127.0.0.1:41817: java.net.ConnectException: Connection refused 2023-06-08 16:55:31,759 WARN [Thread-648] hdfs.DataStreamer(1658): Abandoning BP-1723220111-148.251.75.209-1686243302687:blk_1073741844_1026 2023-06-08 16:55:31,759 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_587180065_17 at /127.0.0.1:34562 [Receiving block BP-1723220111-148.251.75.209-1686243302687:blk_1073741844_1026]] datanode.DataXceiver(323): 127.0.0.1:45233:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34562 dst: /127.0.0.1:45233 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:31,759 WARN [Thread-648] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41817,DS-7bbf8e6b-ab67-452d-9ab5-c145245a9853,DISK] 2023-06-08 16:55:31,762 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_587180065_17 at /127.0.0.1:34574 [Receiving block BP-1723220111-148.251.75.209-1686243302687:blk_1073741845_1027]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/cluster_6ce0e350-0cdf-eede-66c7-5cb864cdfac2/dfs/data/data5/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/cluster_6ce0e350-0cdf-eede-66c7-5cb864cdfac2/dfs/data/data6/current]'}, localName='127.0.0.1:45233', datanodeUuid='b890f9f5-138c-46e0-afe6-628a77aa48ee', xmitsInProgress=0}:Exception transfering block BP-1723220111-148.251.75.209-1686243302687:blk_1073741845_1027 to mirror 127.0.0.1:35999: java.net.ConnectException: Connection refused 2023-06-08 16:55:31,762 WARN [Thread-648] hdfs.DataStreamer(1658): Abandoning BP-1723220111-148.251.75.209-1686243302687:blk_1073741845_1027 2023-06-08 16:55:31,763 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_587180065_17 at /127.0.0.1:34574 [Receiving block BP-1723220111-148.251.75.209-1686243302687:blk_1073741845_1027]] datanode.DataXceiver(323): 127.0.0.1:45233:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34574 dst: /127.0.0.1:45233 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:31,763 WARN [Thread-648] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:35999,DS-ca2f5a60-f3b7-4cbe-9e7f-af6972f893b2,DISK] 2023-06-08 16:55:31,764 WARN [IPC Server handler 0 on default port 41115] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG 
log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-06-08 16:55:31,764 WARN [IPC Server handler 0 on default port 41115] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-06-08 16:55:31,764 WARN [IPC Server handler 0 on default port 41115] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-06-08 16:55:31,765 WARN [Thread-650] hdfs.DataStreamer(1658): Abandoning BP-1723220111-148.251.75.209-1686243302687:blk_1073741846_1028 2023-06-08 16:55:31,766 WARN [Thread-650] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45209,DS-9b7ce5c9-addc-428e-ab71-5ce9b73906b1,DISK] 2023-06-08 16:55:31,772 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_587180065_17 at /127.0.0.1:34592 [Receiving block BP-1723220111-148.251.75.209-1686243302687:blk_1073741848_1030]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/cluster_6ce0e350-0cdf-eede-66c7-5cb864cdfac2/dfs/data/data5/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/cluster_6ce0e350-0cdf-eede-66c7-5cb864cdfac2/dfs/data/data6/current]'}, localName='127.0.0.1:45233', datanodeUuid='b890f9f5-138c-46e0-afe6-628a77aa48ee', xmitsInProgress=0}:Exception transfering block BP-1723220111-148.251.75.209-1686243302687:blk_1073741848_1030 to mirror 127.0.0.1:41817: java.net.ConnectException: Connection refused 2023-06-08 16:55:31,772 WARN [Thread-650] hdfs.DataStreamer(1658): Abandoning BP-1723220111-148.251.75.209-1686243302687:blk_1073741848_1030 2023-06-08 16:55:31,772 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_587180065_17 at /127.0.0.1:34592 [Receiving block BP-1723220111-148.251.75.209-1686243302687:blk_1073741848_1030]] datanode.DataXceiver(323): 127.0.0.1:45233:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34592 dst: /127.0.0.1:45233 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:31,773 WARN [Thread-650] 
hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41817,DS-7bbf8e6b-ab67-452d-9ab5-c145245a9853,DISK] 2023-06-08 16:55:31,774 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,41407,1686243304460/jenkins-hbase20.apache.org%2C41407%2C1686243304460.1686243327604 with entries=13, filesize=14.09 KB; new WAL /user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,41407,1686243304460/jenkins-hbase20.apache.org%2C41407%2C1686243304460.1686243331747 2023-06-08 16:55:31,774 WARN [Thread-650] hdfs.DataStreamer(1658): Abandoning BP-1723220111-148.251.75.209-1686243302687:blk_1073741849_1031 2023-06-08 16:55:31,774 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45233,DS-17c57e98-0b7d-4ece-9449-2d30299f0570,DISK]] 2023-06-08 16:55:31,774 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,41407,1686243304460/jenkins-hbase20.apache.org%2C41407%2C1686243304460.1686243327604 is not closed yet, will try archiving it next time 2023-06-08 16:55:31,775 WARN [Thread-650] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42053,DS-9e192c7c-b0c9-4ac8-8c93-ebfe35e9c1dd,DISK] 2023-06-08 16:55:31,778 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_587180065_17 at /127.0.0.1:34602 [Receiving block BP-1723220111-148.251.75.209-1686243302687:blk_1073741850_1032]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/cluster_6ce0e350-0cdf-eede-66c7-5cb864cdfac2/dfs/data/data5/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/cluster_6ce0e350-0cdf-eede-66c7-5cb864cdfac2/dfs/data/data6/current]'}, localName='127.0.0.1:45233', datanodeUuid='b890f9f5-138c-46e0-afe6-628a77aa48ee', xmitsInProgress=0}:Exception transfering block BP-1723220111-148.251.75.209-1686243302687:blk_1073741850_1032 to mirror 127.0.0.1:35999: java.net.ConnectException: Connection refused 2023-06-08 16:55:31,778 WARN [Thread-650] hdfs.DataStreamer(1658): Abandoning BP-1723220111-148.251.75.209-1686243302687:blk_1073741850_1032 2023-06-08 16:55:31,778 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_587180065_17 at /127.0.0.1:34602 [Receiving block BP-1723220111-148.251.75.209-1686243302687:blk_1073741850_1032]] datanode.DataXceiver(323): 127.0.0.1:45233:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34602 dst: /127.0.0.1:45233 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:31,779 WARN [Thread-650] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:35999,DS-ca2f5a60-f3b7-4cbe-9e7f-af6972f893b2,DISK] 2023-06-08 16:55:31,780 WARN [IPC Server handler 0 on default port 41115] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-06-08 16:55:31,780 WARN [IPC Server handler 0 on default port 41115] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-06-08 16:55:31,780 WARN [IPC Server handler 0 on default port 41115] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-06-08 16:55:31,971 WARN [sync.4] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:45233,DS-17c57e98-0b7d-4ece-9449-2d30299f0570,DISK]] 2023-06-08 16:55:31,971 WARN [sync.4] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:45233,DS-17c57e98-0b7d-4ece-9449-2d30299f0570,DISK]] 2023-06-08 16:55:31,971 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C41407%2C1686243304460:(num 1686243331747) roll requested 2023-06-08 16:55:31,975 WARN [Thread-660] hdfs.DataStreamer(1658): Abandoning BP-1723220111-148.251.75.209-1686243302687:blk_1073741852_1034 2023-06-08 16:55:31,976 WARN [Thread-660] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45209,DS-9b7ce5c9-addc-428e-ab71-5ce9b73906b1,DISK] 2023-06-08 16:55:31,978 WARN [Thread-660] hdfs.DataStreamer(1658): Abandoning BP-1723220111-148.251.75.209-1686243302687:blk_1073741853_1035 2023-06-08 16:55:31,978 WARN [Thread-660] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42053,DS-9e192c7c-b0c9-4ac8-8c93-ebfe35e9c1dd,DISK] 2023-06-08 16:55:31,980 WARN [Thread-660] hdfs.DataStreamer(1658): Abandoning BP-1723220111-148.251.75.209-1686243302687:blk_1073741854_1036 2023-06-08 16:55:31,981 WARN [Thread-660] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41817,DS-7bbf8e6b-ab67-452d-9ab5-c145245a9853,DISK] 2023-06-08 16:55:31,982 WARN [Thread-660] hdfs.DataStreamer(1658): Abandoning BP-1723220111-148.251.75.209-1686243302687:blk_1073741855_1037 2023-06-08 16:55:31,983 WARN [Thread-660] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:35999,DS-ca2f5a60-f3b7-4cbe-9e7f-af6972f893b2,DISK] 2023-06-08 16:55:31,984 WARN [IPC Server handler 4 on default port 41115] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-06-08 16:55:31,984 WARN [IPC Server handler 4 on default port 41115] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-06-08 16:55:31,984 WARN [IPC Server handler 4 on default port 41115] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-06-08 16:55:31,989 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,41407,1686243304460/jenkins-hbase20.apache.org%2C41407%2C1686243304460.1686243331747 with entries=1, filesize=1.22 KB; new WAL 
/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,41407,1686243304460/jenkins-hbase20.apache.org%2C41407%2C1686243304460.1686243331971 2023-06-08 16:55:31,989 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45233,DS-17c57e98-0b7d-4ece-9449-2d30299f0570,DISK]] 2023-06-08 16:55:31,989 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,41407,1686243304460/jenkins-hbase20.apache.org%2C41407%2C1686243304460.1686243331747 is not closed yet, will try archiving it next time 2023-06-08 16:55:32,178 WARN [sync.1] wal.FSHLog(757): Too many consecutive RollWriter requests, it's a sign of the total number of live datanodes is lower than the tolerable replicas. 2023-06-08 16:55:32,185 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=12 (bloomFilter=true), to=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0f108deba6795807c04cf20a4ad86d1d/.tmp/info/58cb0bd190984024a09fecf9541f67ae 2023-06-08 16:55:32,195 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0f108deba6795807c04cf20a4ad86d1d/.tmp/info/58cb0bd190984024a09fecf9541f67ae as hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0f108deba6795807c04cf20a4ad86d1d/info/58cb0bd190984024a09fecf9541f67ae 2023-06-08 16:55:32,202 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0f108deba6795807c04cf20a4ad86d1d/info/58cb0bd190984024a09fecf9541f67ae, entries=5, sequenceid=12, filesize=10.0 K 2023-06-08 16:55:32,203 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=9.45 KB/9681 for 0f108deba6795807c04cf20a4ad86d1d in 449ms, sequenceid=12, compaction requested=false 2023-06-08 16:55:32,204 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 0f108deba6795807c04cf20a4ad86d1d: 2023-06-08 16:55:32,390 WARN [Listener at localhost.localdomain/45599] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 16:55:32,392 WARN [Listener at localhost.localdomain/45599] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 16:55:32,393 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,41407,1686243304460/jenkins-hbase20.apache.org%2C41407%2C1686243304460.1686243315347 to hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/oldWALs/jenkins-hbase20.apache.org%2C41407%2C1686243304460.1686243315347 2023-06-08 16:55:32,393 INFO [Listener at localhost.localdomain/45599] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 16:55:32,398 INFO [Listener at 
localhost.localdomain/45599] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/java.io.tmpdir/Jetty_localhost_40515_datanode____y5a4xw/webapp 2023-06-08 16:55:32,469 INFO [Listener at localhost.localdomain/45599] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40515 2023-06-08 16:55:32,476 WARN [Listener at localhost.localdomain/34247] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 16:55:32,557 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x962b00409de0d3d7: Processing first storage report for DS-7bbf8e6b-ab67-452d-9ab5-c145245a9853 from datanode 467742bf-4c0e-4765-bdb0-37ffeff8ae1a 2023-06-08 16:55:32,558 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x962b00409de0d3d7: from storage DS-7bbf8e6b-ab67-452d-9ab5-c145245a9853 node DatanodeRegistration(127.0.0.1:40025, datanodeUuid=467742bf-4c0e-4765-bdb0-37ffeff8ae1a, infoPort=33713, infoSecurePort=0, ipcPort=34247, storageInfo=lv=-57;cid=testClusterID;nsid=1895245100;c=1686243302687), blocks: 7, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-08 16:55:32,558 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x962b00409de0d3d7: Processing first storage report for DS-5d17193c-ce0b-4889-9c71-95473055db96 from datanode 467742bf-4c0e-4765-bdb0-37ffeff8ae1a 2023-06-08 16:55:32,558 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x962b00409de0d3d7: from storage DS-5d17193c-ce0b-4889-9c71-95473055db96 node DatanodeRegistration(127.0.0.1:40025, datanodeUuid=467742bf-4c0e-4765-bdb0-37ffeff8ae1a, infoPort=33713, infoSecurePort=0, ipcPort=34247, storageInfo=lv=-57;cid=testClusterID;nsid=1895245100;c=1686243302687), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 16:55:33,384 WARN [master/jenkins-hbase20:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35999,DS-ca2f5a60-f3b7-4cbe-9e7f-af6972f893b2,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:55:33,385 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C44707%2C1686243303167:(num 1686243303314) roll requested 2023-06-08 16:55:33,392 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35999,DS-ca2f5a60-f3b7-4cbe-9e7f-af6972f893b2,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:55:33,393 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423) at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35999,DS-ca2f5a60-f3b7-4cbe-9e7f-af6972f893b2,DISK]] are bad. 
Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:55:33,395 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1365030196_17 at /127.0.0.1:34640 [Receiving block BP-1723220111-148.251.75.209-1686243302687:blk_1073741857_1039]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/cluster_6ce0e350-0cdf-eede-66c7-5cb864cdfac2/dfs/data/data5/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/cluster_6ce0e350-0cdf-eede-66c7-5cb864cdfac2/dfs/data/data6/current]'}, localName='127.0.0.1:45233', datanodeUuid='b890f9f5-138c-46e0-afe6-628a77aa48ee', xmitsInProgress=0}:Exception transfering block BP-1723220111-148.251.75.209-1686243302687:blk_1073741857_1039 to mirror 127.0.0.1:45209: java.net.ConnectException: Connection refused 2023-06-08 16:55:33,395 WARN [Thread-696] hdfs.DataStreamer(1658): Abandoning BP-1723220111-148.251.75.209-1686243302687:blk_1073741857_1039 2023-06-08 16:55:33,395 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1365030196_17 at /127.0.0.1:34640 [Receiving block BP-1723220111-148.251.75.209-1686243302687:blk_1073741857_1039]] datanode.DataXceiver(323): 127.0.0.1:45233:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34640 dst: /127.0.0.1:45233 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:33,396 WARN [Thread-696] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45209,DS-9b7ce5c9-addc-428e-ab71-5ce9b73906b1,DISK] 2023-06-08 16:55:33,398 WARN [Thread-696] hdfs.DataStreamer(1658): Abandoning BP-1723220111-148.251.75.209-1686243302687:blk_1073741858_1040 2023-06-08 16:55:33,398 WARN [Thread-696] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42053,DS-9e192c7c-b0c9-4ac8-8c93-ebfe35e9c1dd,DISK] 2023-06-08 16:55:33,406 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL 2023-06-08 16:55:33,407 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/MasterData/WALs/jenkins-hbase20.apache.org,44707,1686243303167/jenkins-hbase20.apache.org%2C44707%2C1686243303167.1686243303314 with entries=88, filesize=43.76 KB; new 
WAL /user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/MasterData/WALs/jenkins-hbase20.apache.org,44707,1686243303167/jenkins-hbase20.apache.org%2C44707%2C1686243303167.1686243333385 2023-06-08 16:55:33,407 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40025,DS-7bbf8e6b-ab67-452d-9ab5-c145245a9853,DISK], DatanodeInfoWithStorage[127.0.0.1:45233,DS-17c57e98-0b7d-4ece-9449-2d30299f0570,DISK]] 2023-06-08 16:55:33,407 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/MasterData/WALs/jenkins-hbase20.apache.org,44707,1686243303167/jenkins-hbase20.apache.org%2C44707%2C1686243303167.1686243303314 is not closed yet, will try archiving it next time 2023-06-08 16:55:33,407 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35999,DS-ca2f5a60-f3b7-4cbe-9e7f-af6972f893b2,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:55:33,407 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/MasterData/WALs/jenkins-hbase20.apache.org,44707,1686243303167/jenkins-hbase20.apache.org%2C44707%2C1686243303167.1686243303314; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35999,DS-ca2f5a60-f3b7-4cbe-9e7f-af6972f893b2,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:55:33,777 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@1015a688] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:45233, datanodeUuid=b890f9f5-138c-46e0-afe6-628a77aa48ee, infoPort=42421, infoSecurePort=0, ipcPort=37785, storageInfo=lv=-57;cid=testClusterID;nsid=1895245100;c=1686243302687):Failed to transfer BP-1723220111-148.251.75.209-1686243302687:blk_1073741841_1023 to 127.0.0.1:42053 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:33,777 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@39c16ffc] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:45233, datanodeUuid=b890f9f5-138c-46e0-afe6-628a77aa48ee, infoPort=42421, infoSecurePort=0, ipcPort=37785, storageInfo=lv=-57;cid=testClusterID;nsid=1895245100;c=1686243302687):Failed to transfer BP-1723220111-148.251.75.209-1686243302687:blk_1073741851_1033 to 127.0.0.1:45209 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:34,777 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@1be6b1c2] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:45233, datanodeUuid=b890f9f5-138c-46e0-afe6-628a77aa48ee, infoPort=42421, infoSecurePort=0, ipcPort=37785, storageInfo=lv=-57;cid=testClusterID;nsid=1895245100;c=1686243302687):Failed to transfer BP-1723220111-148.251.75.209-1686243302687:blk_1073741847_1029 to 127.0.0.1:42053 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:40,559 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@c3b075e] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40025, datanodeUuid=467742bf-4c0e-4765-bdb0-37ffeff8ae1a, infoPort=33713, infoSecurePort=0, ipcPort=34247, 
storageInfo=lv=-57;cid=testClusterID;nsid=1895245100;c=1686243302687):Failed to transfer BP-1723220111-148.251.75.209-1686243302687:blk_1073741830_1006 to 127.0.0.1:45209 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:42,562 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@6eba577e] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40025, datanodeUuid=467742bf-4c0e-4765-bdb0-37ffeff8ae1a, infoPort=33713, infoSecurePort=0, ipcPort=34247, storageInfo=lv=-57;cid=testClusterID;nsid=1895245100;c=1686243302687):Failed to transfer BP-1723220111-148.251.75.209-1686243302687:blk_1073741827_1003 to 127.0.0.1:45209 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:42,562 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@375f5382] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40025, datanodeUuid=467742bf-4c0e-4765-bdb0-37ffeff8ae1a, infoPort=33713, infoSecurePort=0, ipcPort=34247, storageInfo=lv=-57;cid=testClusterID;nsid=1895245100;c=1686243302687):Failed to transfer BP-1723220111-148.251.75.209-1686243302687:blk_1073741825_1001 to 127.0.0.1:45209 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:45,562 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@630aa3c0] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40025, datanodeUuid=467742bf-4c0e-4765-bdb0-37ffeff8ae1a, infoPort=33713, infoSecurePort=0, ipcPort=34247, storageInfo=lv=-57;cid=testClusterID;nsid=1895245100;c=1686243302687):Failed to transfer BP-1723220111-148.251.75.209-1686243302687:blk_1073741826_1002 to 127.0.0.1:42053 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at 
java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:45,562 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@3fd066a0] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40025, datanodeUuid=467742bf-4c0e-4765-bdb0-37ffeff8ae1a, infoPort=33713, infoSecurePort=0, ipcPort=34247, storageInfo=lv=-57;cid=testClusterID;nsid=1895245100;c=1686243302687):Failed to transfer BP-1723220111-148.251.75.209-1686243302687:blk_1073741837_1013 to 127.0.0.1:45209 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:46,561 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@3edd783] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40025, datanodeUuid=467742bf-4c0e-4765-bdb0-37ffeff8ae1a, infoPort=33713, infoSecurePort=0, ipcPort=34247, storageInfo=lv=-57;cid=testClusterID;nsid=1895245100;c=1686243302687):Failed to transfer BP-1723220111-148.251.75.209-1686243302687:blk_1073741835_1011 to 127.0.0.1:45209 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:46,561 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@4ee3473f] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40025, datanodeUuid=467742bf-4c0e-4765-bdb0-37ffeff8ae1a, infoPort=33713, infoSecurePort=0, ipcPort=34247, storageInfo=lv=-57;cid=testClusterID;nsid=1895245100;c=1686243302687):Failed to transfer BP-1723220111-148.251.75.209-1686243302687:blk_1073741831_1007 to 127.0.0.1:42053 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:51,086 INFO [Listener at localhost.localdomain/34247] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,41407,1686243304460/jenkins-hbase20.apache.org%2C41407%2C1686243304460.1686243331971 with entries=2, filesize=1.57 KB; new WAL /user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,41407,1686243304460/jenkins-hbase20.apache.org%2C41407%2C1686243304460.1686243351069 2023-06-08 16:55:51,086 DEBUG [Listener at localhost.localdomain/34247] wal.AbstractFSWAL(887): Create new FSHLog 
writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45233,DS-17c57e98-0b7d-4ece-9449-2d30299f0570,DISK], DatanodeInfoWithStorage[127.0.0.1:40025,DS-7bbf8e6b-ab67-452d-9ab5-c145245a9853,DISK]] 2023-06-08 16:55:51,086 DEBUG [Listener at localhost.localdomain/34247] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,41407,1686243304460/jenkins-hbase20.apache.org%2C41407%2C1686243304460.1686243331971 is not closed yet, will try archiving it next time 2023-06-08 16:55:51,090 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41407] regionserver.HRegion(9158): Flush requested on 0f108deba6795807c04cf20a4ad86d1d 2023-06-08 16:55:51,091 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 0f108deba6795807c04cf20a4ad86d1d 1/1 column families, dataSize=10.50 KB heapSize=11.50 KB 2023-06-08 16:55:51,092 INFO [sync.0] wal.FSHLog(774): LowReplication-Roller was enabled. 2023-06-08 16:55:51,100 WARN [Thread-729] hdfs.DataStreamer(1658): Abandoning BP-1723220111-148.251.75.209-1686243302687:blk_1073741861_1043 2023-06-08 16:55:51,100 WARN [Thread-729] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42053,DS-9e192c7c-b0c9-4ac8-8c93-ebfe35e9c1dd,DISK] 2023-06-08 16:55:51,102 WARN [Thread-729] hdfs.DataStreamer(1658): Abandoning BP-1723220111-148.251.75.209-1686243302687:blk_1073741862_1044 2023-06-08 16:55:51,102 WARN [Thread-729] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45209,DS-9b7ce5c9-addc-428e-ab71-5ce9b73906b1,DISK] 2023-06-08 16:55:51,108 INFO [Listener at localhost.localdomain/34247] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-08 16:55:51,108 INFO [Listener at localhost.localdomain/34247] client.ConnectionImplementation(1980): Closing master protocol: MasterService 2023-06-08 16:55:51,108 DEBUG [Listener at localhost.localdomain/34247] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x443da58d to 127.0.0.1:53698 2023-06-08 16:55:51,109 DEBUG [Listener at localhost.localdomain/34247] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 16:55:51,110 DEBUG [Listener at localhost.localdomain/34247] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-06-08 16:55:51,110 DEBUG [Listener at localhost.localdomain/34247] util.JVMClusterUtil(257): Found active master hash=1351390084, stopped=false 2023-06-08 16:55:51,110 INFO [Listener at localhost.localdomain/34247] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,44707,1686243303167 2023-06-08 16:55:51,114 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): regionserver:38967-0x101cba4f7120001, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-08 16:55:51,114 INFO [Listener at localhost.localdomain/34247] procedure2.ProcedureExecutor(629): Stopping 2023-06-08 16:55:51,114 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-08 16:55:51,114 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, 
path=/hbase 2023-06-08 16:55:51,114 DEBUG [Listener at localhost.localdomain/34247] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x06073e90 to 127.0.0.1:53698 2023-06-08 16:55:51,114 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): regionserver:41407-0x101cba4f7120005, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-08 16:55:51,114 DEBUG [Listener at localhost.localdomain/34247] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 16:55:51,115 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38967-0x101cba4f7120001, quorum=127.0.0.1:53698, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 16:55:51,115 INFO [Listener at localhost.localdomain/34247] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,38967,1686243303213' ***** 2023-06-08 16:55:51,115 INFO [Listener at localhost.localdomain/34247] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-08 16:55:51,115 INFO [Listener at localhost.localdomain/34247] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,41407,1686243304460' ***** 2023-06-08 16:55:51,115 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 16:55:51,115 INFO [Listener at localhost.localdomain/34247] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-08 16:55:51,115 INFO [RS:0;jenkins-hbase20:38967] regionserver.HeapMemoryManager(220): Stopping 2023-06-08 16:55:51,115 INFO [RS:1;jenkins-hbase20:41407] regionserver.HeapMemoryManager(220): Stopping 2023-06-08 16:55:51,115 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41407-0x101cba4f7120005, quorum=127.0.0.1:53698, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 16:55:51,115 INFO [RS:0;jenkins-hbase20:38967] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-08 16:55:51,115 INFO [RS:0;jenkins-hbase20:38967] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-06-08 16:55:51,116 INFO [RS:0;jenkins-hbase20:38967] regionserver.HRegionServer(3303): Received CLOSE for 4ca15ea9ab6b9efd109f2ba06e32576b 2023-06-08 16:55:51,116 INFO [RS:0;jenkins-hbase20:38967] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,38967,1686243303213 2023-06-08 16:55:51,116 DEBUG [RS:0;jenkins-hbase20:38967] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x756abff9 to 127.0.0.1:53698 2023-06-08 16:55:51,116 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 4ca15ea9ab6b9efd109f2ba06e32576b, disabling compactions & flushes 2023-06-08 16:55:51,116 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-08 16:55:51,116 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686243303843.4ca15ea9ab6b9efd109f2ba06e32576b. 
2023-06-08 16:55:51,116 DEBUG [RS:0;jenkins-hbase20:38967] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 16:55:51,117 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686243303843.4ca15ea9ab6b9efd109f2ba06e32576b. 2023-06-08 16:55:51,117 INFO [RS:0;jenkins-hbase20:38967] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-08 16:55:51,117 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686243303843.4ca15ea9ab6b9efd109f2ba06e32576b. after waiting 0 ms 2023-06-08 16:55:51,117 INFO [RS:0;jenkins-hbase20:38967] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-08 16:55:51,117 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686243303843.4ca15ea9ab6b9efd109f2ba06e32576b. 2023-06-08 16:55:51,117 INFO [RS:0;jenkins-hbase20:38967] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-06-08 16:55:51,117 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 4ca15ea9ab6b9efd109f2ba06e32576b 1/1 column families, dataSize=78 B heapSize=488 B 2023-06-08 16:55:51,117 INFO [RS:0;jenkins-hbase20:38967] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-08 16:55:51,117 INFO [RS:0;jenkins-hbase20:38967] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-06-08 16:55:51,117 WARN [RS:0;jenkins-hbase20:38967.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=7, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35999,DS-ca2f5a60-f3b7-4cbe-9e7f-af6972f893b2,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:55:51,117 DEBUG [RS:0;jenkins-hbase20:38967] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 4ca15ea9ab6b9efd109f2ba06e32576b=hbase:namespace,,1686243303843.4ca15ea9ab6b9efd109f2ba06e32576b.} 2023-06-08 16:55:51,119 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 4ca15ea9ab6b9efd109f2ba06e32576b: 2023-06-08 16:55:51,119 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C38967%2C1686243303213:(num 1686243303599) roll requested 2023-06-08 16:55:51,119 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-08 16:55:51,119 DEBUG [RS:0;jenkins-hbase20:38967] regionserver.HRegionServer(1504): Waiting on 1588230740, 4ca15ea9ab6b9efd109f2ba06e32576b 2023-06-08 16:55:51,119 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase20.apache.org,38967,1686243303213: Unrecoverable exception while closing hbase:namespace,,1686243303843.4ca15ea9ab6b9efd109f2ba06e32576b. 
***** org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35999,DS-ca2f5a60-f3b7-4cbe-9e7f-af6972f893b2,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:55:51,119 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-08 16:55:51,121 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2023-06-08 16:55:51,121 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-08 16:55:51,120 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.50 KB at sequenceid=25 (bloomFilter=true), to=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0f108deba6795807c04cf20a4ad86d1d/.tmp/info/2b2a752e497646c4a42ed45e15ba9303 2023-06-08 16:55:51,121 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-08 16:55:51,121 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-08 16:55:51,121 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-08 16:55:51,121 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. 
Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-06-08 16:55:51,126 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for java.lang:type=Memory 2023-06-08 16:55:51,126 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1959771105_17 at /127.0.0.1:52034 [Receiving block BP-1723220111-148.251.75.209-1686243302687:blk_1073741864_1046]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/cluster_6ce0e350-0cdf-eede-66c7-5cb864cdfac2/dfs/data/data3/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/cluster_6ce0e350-0cdf-eede-66c7-5cb864cdfac2/dfs/data/data4/current]'}, localName='127.0.0.1:40025', datanodeUuid='467742bf-4c0e-4765-bdb0-37ffeff8ae1a', xmitsInProgress=0}:Exception transfering block BP-1723220111-148.251.75.209-1686243302687:blk_1073741864_1046 to mirror 127.0.0.1:42053: java.net.ConnectException: Connection refused 2023-06-08 16:55:51,126 WARN [Thread-736] hdfs.DataStreamer(1658): Abandoning BP-1723220111-148.251.75.209-1686243302687:blk_1073741864_1046 2023-06-08 16:55:51,127 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1959771105_17 at /127.0.0.1:52034 [Receiving block BP-1723220111-148.251.75.209-1686243302687:blk_1073741864_1046]] datanode.DataXceiver(323): 127.0.0.1:40025:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:52034 dst: /127.0.0.1:40025 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:55:51,127 WARN [Thread-736] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42053,DS-9e192c7c-b0c9-4ac8-8c93-ebfe35e9c1dd,DISK] 2023-06-08 16:55:51,127 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC 2023-06-08 16:55:51,128 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication 2023-06-08 16:55:51,128 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server 2023-06-08 16:55:51,128 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 1028653056, "init": 524288000, "max": 2051014656, "used": 336562888 }, "NonHeapMemoryUsage": { "committed": 133521408, "init": 2555904, "max": -1, 
"used": 131068232 }, "Verbose": false, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] } 2023-06-08 16:55:51,128 WARN [Thread-736] hdfs.DataStreamer(1658): Abandoning BP-1723220111-148.251.75.209-1686243302687:blk_1073741865_1047 2023-06-08 16:55:51,129 WARN [Thread-736] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45209,DS-9b7ce5c9-addc-428e-ab71-5ce9b73906b1,DISK] 2023-06-08 16:55:51,130 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0f108deba6795807c04cf20a4ad86d1d/.tmp/info/2b2a752e497646c4a42ed45e15ba9303 as hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0f108deba6795807c04cf20a4ad86d1d/info/2b2a752e497646c4a42ed45e15ba9303 2023-06-08 16:55:51,136 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44707] master.MasterRpcServices(609): jenkins-hbase20.apache.org,38967,1686243303213 reported a fatal error: ***** ABORTING region server jenkins-hbase20.apache.org,38967,1686243303213: Unrecoverable exception while closing hbase:namespace,,1686243303843.4ca15ea9ab6b9efd109f2ba06e32576b. ***** Cause: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35999,DS-ca2f5a60-f3b7-4cbe-9e7f-af6972f893b2,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:55:51,137 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL 2023-06-08 16:55:51,139 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,38967,1686243303213/jenkins-hbase20.apache.org%2C38967%2C1686243303213.1686243303599 with entries=3, filesize=601 B; new WAL /user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,38967,1686243303213/jenkins-hbase20.apache.org%2C38967%2C1686243303213.1686243351119 2023-06-08 16:55:51,139 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45233,DS-17c57e98-0b7d-4ece-9449-2d30299f0570,DISK], DatanodeInfoWithStorage[127.0.0.1:40025,DS-7bbf8e6b-ab67-452d-9ab5-c145245a9853,DISK]] 2023-06-08 16:55:51,139 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35999,DS-ca2f5a60-f3b7-4cbe-9e7f-af6972f893b2,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:55:51,139 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,38967,1686243303213/jenkins-hbase20.apache.org%2C38967%2C1686243303213.1686243303599 is not closed yet, will try archiving it next time 2023-06-08 16:55:51,139 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,38967,1686243303213/jenkins-hbase20.apache.org%2C38967%2C1686243303213.1686243303599; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35999,DS-ca2f5a60-f3b7-4cbe-9e7f-af6972f893b2,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:55:51,145 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0f108deba6795807c04cf20a4ad86d1d/info/2b2a752e497646c4a42ed45e15ba9303, entries=8, sequenceid=25, filesize=13.2 K 2023-06-08 16:55:51,146 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~10.50 KB/10757, heapSize ~11.48 KB/11760, currentSize=9.46 KB/9684 for 0f108deba6795807c04cf20a4ad86d1d in 56ms, sequenceid=25, compaction requested=false 2023-06-08 16:55:51,147 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 0f108deba6795807c04cf20a4ad86d1d: 2023-06-08 16:55:51,147 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=23.2 K, sizeToCheck=16.0 K 2023-06-08 16:55:51,147 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-08 16:55:51,147 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0f108deba6795807c04cf20a4ad86d1d/info/2b2a752e497646c4a42ed45e15ba9303 because midkey is the same as first or last row 2023-06-08 16:55:51,147 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-08 16:55:51,147 INFO [RS:1;jenkins-hbase20:41407] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-08 16:55:51,147 INFO [RS:1;jenkins-hbase20:41407] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-06-08 16:55:51,147 INFO [RS:1;jenkins-hbase20:41407] regionserver.HRegionServer(3303): Received CLOSE for 0f108deba6795807c04cf20a4ad86d1d 2023-06-08 16:55:51,147 INFO [RS:1;jenkins-hbase20:41407] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,41407,1686243304460 2023-06-08 16:55:51,147 DEBUG [RS:1;jenkins-hbase20:41407] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6c0872ca to 127.0.0.1:53698 2023-06-08 16:55:51,147 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 0f108deba6795807c04cf20a4ad86d1d, disabling compactions & flushes 2023-06-08 16:55:51,147 DEBUG [RS:1;jenkins-hbase20:41407] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 16:55:51,147 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1686243304556.0f108deba6795807c04cf20a4ad86d1d. 2023-06-08 16:55:51,147 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1686243304556.0f108deba6795807c04cf20a4ad86d1d. 
2023-06-08 16:55:51,147 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1686243304556.0f108deba6795807c04cf20a4ad86d1d. after waiting 0 ms 2023-06-08 16:55:51,148 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1686243304556.0f108deba6795807c04cf20a4ad86d1d. 2023-06-08 16:55:51,147 INFO [RS:1;jenkins-hbase20:41407] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-06-08 16:55:51,148 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 0f108deba6795807c04cf20a4ad86d1d 1/1 column families, dataSize=9.46 KB heapSize=10.38 KB 2023-06-08 16:55:51,148 DEBUG [RS:1;jenkins-hbase20:41407] regionserver.HRegionServer(1478): Online Regions={0f108deba6795807c04cf20a4ad86d1d=TestLogRolling-testLogRollOnDatanodeDeath,,1686243304556.0f108deba6795807c04cf20a4ad86d1d.} 2023-06-08 16:55:51,148 DEBUG [RS:1;jenkins-hbase20:41407] regionserver.HRegionServer(1504): Waiting on 0f108deba6795807c04cf20a4ad86d1d 2023-06-08 16:55:51,153 WARN [Thread-746] hdfs.DataStreamer(1658): Abandoning BP-1723220111-148.251.75.209-1686243302687:blk_1073741867_1049 2023-06-08 16:55:51,154 WARN [Thread-746] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42053,DS-9e192c7c-b0c9-4ac8-8c93-ebfe35e9c1dd,DISK] 2023-06-08 16:55:51,163 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=9.46 KB at sequenceid=37 (bloomFilter=true), to=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0f108deba6795807c04cf20a4ad86d1d/.tmp/info/a14492389faa43cbbffe689eec98fe0f 2023-06-08 16:55:51,173 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0f108deba6795807c04cf20a4ad86d1d/.tmp/info/a14492389faa43cbbffe689eec98fe0f as hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0f108deba6795807c04cf20a4ad86d1d/info/a14492389faa43cbbffe689eec98fe0f 2023-06-08 16:55:51,180 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0f108deba6795807c04cf20a4ad86d1d/info/a14492389faa43cbbffe689eec98fe0f, entries=9, sequenceid=37, filesize=14.2 K 2023-06-08 16:55:51,181 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~9.46 KB/9684, heapSize ~10.36 KB/10608, currentSize=0 B/0 for 0f108deba6795807c04cf20a4ad86d1d in 33ms, sequenceid=37, compaction requested=true 2023-06-08 16:55:51,187 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0f108deba6795807c04cf20a4ad86d1d/recovered.edits/40.seqid, newMaxSeqId=40, maxSeqId=1 2023-06-08 16:55:51,188 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1686243304556.0f108deba6795807c04cf20a4ad86d1d. 2023-06-08 16:55:51,188 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 0f108deba6795807c04cf20a4ad86d1d: 2023-06-08 16:55:51,188 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1686243304556.0f108deba6795807c04cf20a4ad86d1d. 2023-06-08 16:55:51,320 INFO [RS:0;jenkins-hbase20:38967] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-08 16:55:51,320 INFO [RS:0;jenkins-hbase20:38967] regionserver.HRegionServer(3303): Received CLOSE for 4ca15ea9ab6b9efd109f2ba06e32576b 2023-06-08 16:55:51,320 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-08 16:55:51,320 DEBUG [RS:0;jenkins-hbase20:38967] regionserver.HRegionServer(1504): Waiting on 1588230740, 4ca15ea9ab6b9efd109f2ba06e32576b 2023-06-08 16:55:51,320 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-08 16:55:51,321 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-08 16:55:51,321 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-08 16:55:51,321 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-08 16:55:51,320 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 4ca15ea9ab6b9efd109f2ba06e32576b, disabling compactions & flushes 2023-06-08 16:55:51,321 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-08 16:55:51,321 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686243303843.4ca15ea9ab6b9efd109f2ba06e32576b. 2023-06-08 16:55:51,322 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-06-08 16:55:51,322 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686243303843.4ca15ea9ab6b9efd109f2ba06e32576b. 2023-06-08 16:55:51,322 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686243303843.4ca15ea9ab6b9efd109f2ba06e32576b. after waiting 0 ms 2023-06-08 16:55:51,322 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686243303843.4ca15ea9ab6b9efd109f2ba06e32576b. 2023-06-08 16:55:51,322 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 4ca15ea9ab6b9efd109f2ba06e32576b: 2023-06-08 16:55:51,322 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. 
Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1686243303843.4ca15ea9ab6b9efd109f2ba06e32576b. 2023-06-08 16:55:51,348 INFO [RS:1;jenkins-hbase20:41407] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,41407,1686243304460; all regions closed. 2023-06-08 16:55:51,349 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,41407,1686243304460 2023-06-08 16:55:51,366 DEBUG [RS:1;jenkins-hbase20:41407] wal.AbstractFSWAL(1028): Moved 4 WAL file(s) to /user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/oldWALs 2023-06-08 16:55:51,367 INFO [RS:1;jenkins-hbase20:41407] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C41407%2C1686243304460:(num 1686243351069) 2023-06-08 16:55:51,367 DEBUG [RS:1;jenkins-hbase20:41407] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 16:55:51,367 INFO [RS:1;jenkins-hbase20:41407] regionserver.LeaseManager(133): Closed leases 2023-06-08 16:55:51,367 INFO [RS:1;jenkins-hbase20:41407] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-06-08 16:55:51,367 INFO [RS:1;jenkins-hbase20:41407] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-08 16:55:51,367 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-08 16:55:51,367 INFO [RS:1;jenkins-hbase20:41407] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-08 16:55:51,367 INFO [RS:1;jenkins-hbase20:41407] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-06-08 16:55:51,368 INFO [RS:1;jenkins-hbase20:41407] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:41407 2023-06-08 16:55:51,371 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): regionserver:41407-0x101cba4f7120005, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,41407,1686243304460 2023-06-08 16:55:51,371 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 16:55:51,371 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): regionserver:41407-0x101cba4f7120005, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 16:55:51,371 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): regionserver:38967-0x101cba4f7120001, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,41407,1686243304460 2023-06-08 16:55:51,371 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): regionserver:38967-0x101cba4f7120001, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 16:55:51,372 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,41407,1686243304460] 2023-06-08 16:55:51,372 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,41407,1686243304460; numProcessing=1 2023-06-08 16:55:51,373 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,41407,1686243304460 already deleted, retry=false 2023-06-08 16:55:51,373 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,41407,1686243304460 expired; onlineServers=1 2023-06-08 16:55:51,474 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-08 16:55:51,520 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-06-08 16:55:51,520 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-06-08 16:55:51,521 INFO [RS:0;jenkins-hbase20:38967] regionserver.HRegionServer(1499): We were exiting though online regions are not empty, because some regions failed closing 2023-06-08 16:55:51,521 INFO [RS:0;jenkins-hbase20:38967] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,38967,1686243303213; all regions closed. 2023-06-08 16:55:51,522 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,38967,1686243303213 2023-06-08 16:55:51,522 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35999,DS-ca2f5a60-f3b7-4cbe-9e7f-af6972f893b2,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:55:51,525 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/WALs/jenkins-hbase20.apache.org,38967,1686243303213 2023-06-08 16:55:51,532 ERROR [RS:0;jenkins-hbase20:38967] regionserver.HRegionServer(1539): Shutdown / close of WAL failed: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35999,DS-ca2f5a60-f3b7-4cbe-9e7f-af6972f893b2,DISK]] are bad. Aborting... 2023-06-08 16:55:51,532 DEBUG [RS:0;jenkins-hbase20:38967] regionserver.HRegionServer(1540): Shutdown / close exception details: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35999,DS-ca2f5a60-f3b7-4cbe-9e7f-af6972f893b2,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:55:51,532 DEBUG [RS:0;jenkins-hbase20:38967] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 16:55:51,532 INFO [RS:0;jenkins-hbase20:38967] regionserver.LeaseManager(133): Closed leases 2023-06-08 16:55:51,533 INFO [RS:0;jenkins-hbase20:38967] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-06-08 16:55:51,533 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-06-08 16:55:51,534 INFO [RS:0;jenkins-hbase20:38967] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:38967 2023-06-08 16:55:51,536 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): regionserver:38967-0x101cba4f7120001, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,38967,1686243303213 2023-06-08 16:55:51,536 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 16:55:51,537 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,38967,1686243303213] 2023-06-08 16:55:51,537 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,38967,1686243303213; numProcessing=2 2023-06-08 16:55:51,538 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,38967,1686243303213 already deleted, retry=false 2023-06-08 16:55:51,538 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,38967,1686243303213 expired; onlineServers=0 2023-06-08 16:55:51,538 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,44707,1686243303167' ***** 2023-06-08 16:55:51,538 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-06-08 16:55:51,539 DEBUG [M:0;jenkins-hbase20:44707] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@54b31d4d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-08 16:55:51,539 INFO [M:0;jenkins-hbase20:44707] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,44707,1686243303167 2023-06-08 16:55:51,539 INFO [M:0;jenkins-hbase20:44707] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,44707,1686243303167; all regions closed. 2023-06-08 16:55:51,539 DEBUG [M:0;jenkins-hbase20:44707] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 16:55:51,539 DEBUG [M:0;jenkins-hbase20:44707] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-06-08 16:55:51,540 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-06-08 16:55:51,540 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686243303388] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686243303388,5,FailOnTimeoutGroup] 2023-06-08 16:55:51,540 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686243303388] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686243303388,5,FailOnTimeoutGroup] 2023-06-08 16:55:51,540 DEBUG [M:0;jenkins-hbase20:44707] cleaner.HFileCleaner(317): Stopping file delete threads 2023-06-08 16:55:51,541 INFO [M:0;jenkins-hbase20:44707] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 
2023-06-08 16:55:51,541 INFO [M:0;jenkins-hbase20:44707] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-06-08 16:55:51,541 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-06-08 16:55:51,541 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:55:51,541 INFO [M:0;jenkins-hbase20:44707] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-06-08 16:55:51,542 DEBUG [M:0;jenkins-hbase20:44707] master.HMaster(1512): Stopping service threads 2023-06-08 16:55:51,542 INFO [M:0;jenkins-hbase20:44707] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-06-08 16:55:51,542 ERROR [M:0;jenkins-hbase20:44707] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-06-08 16:55:51,542 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-08 16:55:51,542 INFO [M:0;jenkins-hbase20:44707] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-06-08 16:55:51,543 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-06-08 16:55:51,543 DEBUG [M:0;jenkins-hbase20:44707] zookeeper.ZKUtil(398): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-06-08 16:55:51,543 WARN [M:0;jenkins-hbase20:44707] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-06-08 16:55:51,543 INFO [M:0;jenkins-hbase20:44707] assignment.AssignmentManager(315): Stopping assignment manager 2023-06-08 16:55:51,544 INFO [M:0;jenkins-hbase20:44707] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-06-08 16:55:51,544 DEBUG [M:0;jenkins-hbase20:44707] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-08 16:55:51,544 INFO [M:0;jenkins-hbase20:44707] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:55:51,544 DEBUG [M:0;jenkins-hbase20:44707] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:55:51,544 DEBUG [M:0;jenkins-hbase20:44707] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-08 16:55:51,544 DEBUG [M:0;jenkins-hbase20:44707] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-08 16:55:51,545 INFO [M:0;jenkins-hbase20:44707] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.13 KB heapSize=45.77 KB 2023-06-08 16:55:51,553 WARN [Thread-754] hdfs.DataStreamer(1658): Abandoning BP-1723220111-148.251.75.209-1686243302687:blk_1073741869_1051 2023-06-08 16:55:51,553 WARN [Thread-754] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45209,DS-9b7ce5c9-addc-428e-ab71-5ce9b73906b1,DISK] 2023-06-08 16:55:51,555 WARN [Thread-754] hdfs.DataStreamer(1658): Abandoning BP-1723220111-148.251.75.209-1686243302687:blk_1073741870_1052 2023-06-08 16:55:51,555 WARN [Thread-754] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42053,DS-9e192c7c-b0c9-4ac8-8c93-ebfe35e9c1dd,DISK] 2023-06-08 16:55:51,563 INFO [M:0;jenkins-hbase20:44707] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.13 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/89017057b805431e8def4eb5e7a3a165 2023-06-08 16:55:51,569 DEBUG [M:0;jenkins-hbase20:44707] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/89017057b805431e8def4eb5e7a3a165 as hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/89017057b805431e8def4eb5e7a3a165 2023-06-08 16:55:51,575 INFO [M:0;jenkins-hbase20:44707] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41115/user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/89017057b805431e8def4eb5e7a3a165, entries=11, sequenceid=92, filesize=7.0 K 2023-06-08 16:55:51,577 INFO [M:0;jenkins-hbase20:44707] regionserver.HRegion(2948): Finished flush of dataSize ~38.13 KB/39047, heapSize ~45.75 KB/46848, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 33ms, sequenceid=92, compaction requested=false 2023-06-08 16:55:51,578 INFO [M:0;jenkins-hbase20:44707] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:55:51,578 DEBUG [M:0;jenkins-hbase20:44707] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-08 16:55:51,579 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/4ff66e8c-1b01-a08a-fce8-ad2eac448b06/MasterData/WALs/jenkins-hbase20.apache.org,44707,1686243303167 2023-06-08 16:55:51,582 INFO [M:0;jenkins-hbase20:44707] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-06-08 16:55:51,583 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-08 16:55:51,583 INFO [M:0;jenkins-hbase20:44707] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:44707 2023-06-08 16:55:51,584 DEBUG [M:0;jenkins-hbase20:44707] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,44707,1686243303167 already deleted, retry=false 2023-06-08 16:55:51,616 INFO [RS:1;jenkins-hbase20:41407] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,41407,1686243304460; zookeeper connection closed. 
2023-06-08 16:55:51,616 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): regionserver:41407-0x101cba4f7120005, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 16:55:51,616 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): regionserver:41407-0x101cba4f7120005, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 16:55:51,617 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@53b36391] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@53b36391 2023-06-08 16:55:51,717 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 16:55:51,717 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): master:44707-0x101cba4f7120000, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 16:55:51,717 INFO [M:0;jenkins-hbase20:44707] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,44707,1686243303167; zookeeper connection closed. 2023-06-08 16:55:51,817 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): regionserver:38967-0x101cba4f7120001, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 16:55:51,817 INFO [RS:0;jenkins-hbase20:38967] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,38967,1686243303213; zookeeper connection closed. 
2023-06-08 16:55:51,817 DEBUG [Listener at localhost.localdomain/46243-EventThread] zookeeper.ZKWatcher(600): regionserver:38967-0x101cba4f7120001, quorum=127.0.0.1:53698, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 16:55:51,818 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@34b800af] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@34b800af 2023-06-08 16:55:51,819 INFO [Listener at localhost.localdomain/34247] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 2 regionserver(s) complete 2023-06-08 16:55:51,819 WARN [Listener at localhost.localdomain/34247] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 16:55:51,825 INFO [Listener at localhost.localdomain/34247] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 16:55:51,935 WARN [BP-1723220111-148.251.75.209-1686243302687 heartbeating to localhost.localdomain/127.0.0.1:41115] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-08 16:55:51,936 WARN [BP-1723220111-148.251.75.209-1686243302687 heartbeating to localhost.localdomain/127.0.0.1:41115] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1723220111-148.251.75.209-1686243302687 (Datanode Uuid 467742bf-4c0e-4765-bdb0-37ffeff8ae1a) service to localhost.localdomain/127.0.0.1:41115 2023-06-08 16:55:51,936 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/cluster_6ce0e350-0cdf-eede-66c7-5cb864cdfac2/dfs/data/data3/current/BP-1723220111-148.251.75.209-1686243302687] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:55:51,937 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/cluster_6ce0e350-0cdf-eede-66c7-5cb864cdfac2/dfs/data/data4/current/BP-1723220111-148.251.75.209-1686243302687] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:55:51,940 WARN [Listener at localhost.localdomain/34247] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 16:55:51,945 INFO [Listener at localhost.localdomain/34247] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 16:55:52,053 WARN [BP-1723220111-148.251.75.209-1686243302687 heartbeating to localhost.localdomain/127.0.0.1:41115] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-08 16:55:52,053 WARN [BP-1723220111-148.251.75.209-1686243302687 heartbeating to localhost.localdomain/127.0.0.1:41115] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1723220111-148.251.75.209-1686243302687 (Datanode Uuid b890f9f5-138c-46e0-afe6-628a77aa48ee) service to localhost.localdomain/127.0.0.1:41115 2023-06-08 16:55:52,054 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/cluster_6ce0e350-0cdf-eede-66c7-5cb864cdfac2/dfs/data/data5/current/BP-1723220111-148.251.75.209-1686243302687] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep 
interrupted 2023-06-08 16:55:52,055 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/cluster_6ce0e350-0cdf-eede-66c7-5cb864cdfac2/dfs/data/data6/current/BP-1723220111-148.251.75.209-1686243302687] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:55:52,067 INFO [Listener at localhost.localdomain/34247] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-06-08 16:55:52,180 INFO [Listener at localhost.localdomain/34247] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-06-08 16:55:52,211 INFO [Listener at localhost.localdomain/34247] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-06-08 16:55:52,222 INFO [Listener at localhost.localdomain/34247] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=78 (was 52) Potentially hanging thread: LeaseRenewer:jenkins.hfs.1@localhost.localdomain:41115 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.2@localhost.localdomain:41115 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (305059913) connection to localhost.localdomain/127.0.0.1:41115 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-10-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-10-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: Abort regionserver monitor java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-11-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (305059913) connection to localhost.localdomain/127.0.0.1:41115 from jenkins.hfs.1 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ForkJoinPool-2-worker-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: Listener at localhost.localdomain/34247 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: nioEventLoopGroup-10-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (305059913) connection to localhost.localdomain/127.0.0.1:41115 from jenkins.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-6-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (305059913) connection to localhost.localdomain/127.0.0.1:41115 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-11-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-17-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:41115 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-17-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-6-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-11-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-6-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-17-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=459 (was 441) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=53 (was 84), ProcessCount=187 (was 187), AvailableMemoryMB=2104 (was 2324) 2023-06-08 16:55:52,231 INFO [Listener at localhost.localdomain/34247] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=78, OpenFileDescriptor=459, MaxFileDescriptor=60000, SystemLoadAverage=53, ProcessCount=187, AvailableMemoryMB=2104 2023-06-08 16:55:52,232 INFO [Listener at localhost.localdomain/34247] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-06-08 16:55:52,232 INFO [Listener at localhost.localdomain/34247] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/hadoop.log.dir so I do NOT create it in target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed 2023-06-08 16:55:52,232 INFO [Listener at localhost.localdomain/34247] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94f28d2f-f88a-cf79-275f-968cd662bfb4/hadoop.tmp.dir so I do NOT create it in target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed 2023-06-08 16:55:52,232 INFO [Listener at localhost.localdomain/34247] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/cluster_a9c3a89b-cd23-0f0f-5e7f-ddf244302ee3, deleteOnExit=true 2023-06-08 16:55:52,232 INFO [Listener at localhost.localdomain/34247] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-06-08 16:55:52,232 INFO [Listener at localhost.localdomain/34247] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/test.cache.data in system properties and HBase conf 2023-06-08 16:55:52,232 INFO [Listener at localhost.localdomain/34247] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/hadoop.tmp.dir in system properties and HBase conf 2023-06-08 16:55:52,232 INFO [Listener at localhost.localdomain/34247] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/hadoop.log.dir in system properties and HBase conf 2023-06-08 16:55:52,233 INFO [Listener at localhost.localdomain/34247] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/mapreduce.cluster.local.dir in system properties and HBase conf 2023-06-08 16:55:52,233 INFO [Listener at localhost.localdomain/34247] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-06-08 16:55:52,233 INFO [Listener at localhost.localdomain/34247] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-06-08 16:55:52,233 DEBUG [Listener at localhost.localdomain/34247] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-06-08 16:55:52,233 INFO [Listener at localhost.localdomain/34247] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-06-08 16:55:52,233 INFO [Listener at localhost.localdomain/34247] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-06-08 16:55:52,233 INFO [Listener at localhost.localdomain/34247] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-06-08 16:55:52,233 INFO [Listener at localhost.localdomain/34247] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-08 16:55:52,233 INFO [Listener at localhost.localdomain/34247] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-06-08 16:55:52,234 INFO [Listener at localhost.localdomain/34247] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-08 16:55:52,234 INFO [Listener at localhost.localdomain/34247] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-08 16:55:52,234 INFO [Listener at localhost.localdomain/34247] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-08 16:55:52,234 INFO [Listener at localhost.localdomain/34247] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-08 16:55:52,234 INFO [Listener at localhost.localdomain/34247] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/nfs.dump.dir in system properties and HBase conf 2023-06-08 16:55:52,234 INFO [Listener at localhost.localdomain/34247] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/java.io.tmpdir in system properties and HBase conf 2023-06-08 16:55:52,234 INFO [Listener at localhost.localdomain/34247] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-08 16:55:52,234 INFO [Listener at localhost.localdomain/34247] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-08 16:55:52,234 INFO [Listener at localhost.localdomain/34247] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-08 16:55:52,235 WARN [Listener at localhost.localdomain/34247] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
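The entries above show HBaseTestingUtility wiring the per-test directories into the Hadoop/HBase configuration and starting DFS for a one-master, one-regionserver, two-datanode, one-ZooKeeper mini cluster. As a rough sketch of how a test drives this startup, using the public HBaseTestingUtility/StartMiniClusterOption API from HBase 2.x (the class name MiniClusterSketch and the empty test body are placeholders, not taken from TestLogRolling):

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    // Mirrors the StartMiniClusterOption printed in the log: 1 master,
    // 1 region server, 2 data nodes, 1 ZooKeeper server.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(1)
        .numDataNodes(2)
        .numZkServers(1)
        .build();
    util.startMiniCluster(option);   // brings up DFS, ZooKeeper and HBase as logged above
    try {
      // ... test body ...
    } finally {
      util.shutdownMiniCluster();    // tears the cluster down and removes the test dirs
    }
  }
}

The option values in the sketch come straight from the StartMiniClusterOption line logged above; everything else is illustrative.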
2023-06-08 16:55:52,237 WARN [Listener at localhost.localdomain/34247] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-08 16:55:52,237 WARN [Listener at localhost.localdomain/34247] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-08 16:55:52,260 WARN [Listener at localhost.localdomain/34247] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 16:55:52,262 INFO [Listener at localhost.localdomain/34247] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 16:55:52,267 INFO [Listener at localhost.localdomain/34247] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/java.io.tmpdir/Jetty_localhost_localdomain_35891_hdfs____.m42dmg/webapp 2023-06-08 16:55:52,337 INFO [Listener at localhost.localdomain/34247] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:35891 2023-06-08 16:55:52,339 WARN [Listener at localhost.localdomain/34247] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-06-08 16:55:52,340 WARN [Listener at localhost.localdomain/34247] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-08 16:55:52,340 WARN [Listener at localhost.localdomain/34247] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-08 16:55:52,365 WARN [Listener at localhost.localdomain/41831] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 16:55:52,378 WARN [Listener at localhost.localdomain/41831] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 16:55:52,380 WARN [Listener at localhost.localdomain/41831] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 16:55:52,382 INFO [Listener at localhost.localdomain/41831] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 16:55:52,387 INFO [Listener at localhost.localdomain/41831] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/java.io.tmpdir/Jetty_localhost_42257_datanode____.tnju9v/webapp 2023-06-08 16:55:52,460 INFO [Listener at localhost.localdomain/41831] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42257 2023-06-08 16:55:52,465 WARN [Listener at localhost.localdomain/37365] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 16:55:52,482 WARN [Listener at localhost.localdomain/37365] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 16:55:52,485 WARN [Listener at localhost.localdomain/37365] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 16:55:52,487 INFO [Listener at localhost.localdomain/37365] 
log.Slf4jLog(67): jetty-6.1.26 2023-06-08 16:55:52,491 INFO [Listener at localhost.localdomain/37365] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/java.io.tmpdir/Jetty_localhost_43633_datanode____qqc6n2/webapp 2023-06-08 16:55:52,528 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x22ee5dd0f343aaf: Processing first storage report for DS-fd375f55-ae42-4931-984a-d4ab7aaaefff from datanode 0c68d5bf-79ac-43fc-a534-69fe8a4918f3 2023-06-08 16:55:52,528 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x22ee5dd0f343aaf: from storage DS-fd375f55-ae42-4931-984a-d4ab7aaaefff node DatanodeRegistration(127.0.0.1:44491, datanodeUuid=0c68d5bf-79ac-43fc-a534-69fe8a4918f3, infoPort=37103, infoSecurePort=0, ipcPort=37365, storageInfo=lv=-57;cid=testClusterID;nsid=746946676;c=1686243352238), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 16:55:52,528 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x22ee5dd0f343aaf: Processing first storage report for DS-c54a996a-606a-4e49-b881-e534da990fe5 from datanode 0c68d5bf-79ac-43fc-a534-69fe8a4918f3 2023-06-08 16:55:52,528 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x22ee5dd0f343aaf: from storage DS-c54a996a-606a-4e49-b881-e534da990fe5 node DatanodeRegistration(127.0.0.1:44491, datanodeUuid=0c68d5bf-79ac-43fc-a534-69fe8a4918f3, infoPort=37103, infoSecurePort=0, ipcPort=37365, storageInfo=lv=-57;cid=testClusterID;nsid=746946676;c=1686243352238), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 16:55:52,530 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-08 16:55:52,570 INFO [Listener at localhost.localdomain/37365] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43633 2023-06-08 16:55:52,577 WARN [Listener at localhost.localdomain/36827] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 16:55:52,651 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6df260988852200d: Processing first storage report for DS-7c58c2f0-66a7-4784-8e63-03fefb7fbb9e from datanode c2261922-210e-4eb1-913b-e3161fd72dd2 2023-06-08 16:55:52,652 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6df260988852200d: from storage DS-7c58c2f0-66a7-4784-8e63-03fefb7fbb9e node DatanodeRegistration(127.0.0.1:33675, datanodeUuid=c2261922-210e-4eb1-913b-e3161fd72dd2, infoPort=46489, infoSecurePort=0, ipcPort=36827, storageInfo=lv=-57;cid=testClusterID;nsid=746946676;c=1686243352238), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 16:55:52,652 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6df260988852200d: Processing first storage report for DS-82443175-e49b-420b-90f5-2e5523b99999 from datanode c2261922-210e-4eb1-913b-e3161fd72dd2 2023-06-08 16:55:52,652 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6df260988852200d: from 
storage DS-82443175-e49b-420b-90f5-2e5523b99999 node DatanodeRegistration(127.0.0.1:33675, datanodeUuid=c2261922-210e-4eb1-913b-e3161fd72dd2, infoPort=46489, infoSecurePort=0, ipcPort=36827, storageInfo=lv=-57;cid=testClusterID;nsid=746946676;c=1686243352238), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 16:55:52,686 DEBUG [Listener at localhost.localdomain/36827] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed 2023-06-08 16:55:52,690 INFO [Listener at localhost.localdomain/36827] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/cluster_a9c3a89b-cd23-0f0f-5e7f-ddf244302ee3/zookeeper_0, clientPort=59477, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/cluster_a9c3a89b-cd23-0f0f-5e7f-ddf244302ee3/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/cluster_a9c3a89b-cd23-0f0f-5e7f-ddf244302ee3/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-06-08 16:55:52,691 INFO [Listener at localhost.localdomain/36827] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=59477 2023-06-08 16:55:52,692 INFO [Listener at localhost.localdomain/36827] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:55:52,692 INFO [Listener at localhost.localdomain/36827] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:55:52,710 INFO [Listener at localhost.localdomain/36827] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5 with version=8 2023-06-08 16:55:52,710 INFO [Listener at localhost.localdomain/36827] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/hbase-staging 2023-06-08 16:55:52,712 INFO [Listener at localhost.localdomain/36827] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-06-08 16:55:52,712 INFO [Listener at localhost.localdomain/36827] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 16:55:52,712 INFO [Listener at localhost.localdomain/36827] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-08 16:55:52,712 INFO [Listener at localhost.localdomain/36827] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-08 16:55:52,712 INFO [Listener at 
localhost.localdomain/36827] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 16:55:52,712 INFO [Listener at localhost.localdomain/36827] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-08 16:55:52,712 INFO [Listener at localhost.localdomain/36827] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-06-08 16:55:52,713 INFO [Listener at localhost.localdomain/36827] ipc.NettyRpcServer(120): Bind to /148.251.75.209:35513 2023-06-08 16:55:52,714 INFO [Listener at localhost.localdomain/36827] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:55:52,715 INFO [Listener at localhost.localdomain/36827] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:55:52,716 INFO [Listener at localhost.localdomain/36827] zookeeper.RecoverableZooKeeper(93): Process identifier=master:35513 connecting to ZooKeeper ensemble=127.0.0.1:59477 2023-06-08 16:55:52,721 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): master:355130x0, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-08 16:55:52,721 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:35513-0x101cba5b89d0000 connected 2023-06-08 16:55:52,734 DEBUG [Listener at localhost.localdomain/36827] zookeeper.ZKUtil(164): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-08 16:55:52,735 DEBUG [Listener at localhost.localdomain/36827] zookeeper.ZKUtil(164): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 16:55:52,735 DEBUG [Listener at localhost.localdomain/36827] zookeeper.ZKUtil(164): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-08 16:55:52,736 DEBUG [Listener at localhost.localdomain/36827] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35513 2023-06-08 16:55:52,736 DEBUG [Listener at localhost.localdomain/36827] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35513 2023-06-08 16:55:52,736 DEBUG [Listener at localhost.localdomain/36827] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35513 2023-06-08 16:55:52,737 DEBUG [Listener at localhost.localdomain/36827] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35513 2023-06-08 16:55:52,737 DEBUG [Listener at localhost.localdomain/36827] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35513 
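The ZKUtil lines above ("Set watcher on znode that does not yet exist, /hbase/master" and friends) rely on the standard ZooKeeper pattern of calling exists() with a watcher: the call returns null for an absent node but still registers the watch, so the later creation of /hbase/master produces the NodeCreated events seen further down. A minimal sketch with the plain ZooKeeper client (the class name and the println are illustrative; HBase wraps this pattern in ZKWatcher/ZKUtil):

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class WatchAbsentZNodeSketch {
  public static void main(String[] args) throws Exception {
    Watcher watcher = (WatchedEvent event) ->
        System.out.println("got " + event.getType() + " on " + event.getPath());
    // 59477 is the MiniZooKeeperCluster client port reported earlier in this log;
    // a real caller would wait for the SyncConnected event before issuing requests.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:59477", 30000, watcher);
    // exists() returns null while /hbase/master is absent but leaves the watch
    // registered, so a later create fires a NodeCreated event on the watcher.
    zk.exists("/hbase/master", watcher);
  }
}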
2023-06-08 16:55:52,737 INFO [Listener at localhost.localdomain/36827] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5, hbase.cluster.distributed=false 2023-06-08 16:55:52,752 INFO [Listener at localhost.localdomain/36827] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-06-08 16:55:52,752 INFO [Listener at localhost.localdomain/36827] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 16:55:52,752 INFO [Listener at localhost.localdomain/36827] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-08 16:55:52,752 INFO [Listener at localhost.localdomain/36827] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-08 16:55:52,752 INFO [Listener at localhost.localdomain/36827] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 16:55:52,753 INFO [Listener at localhost.localdomain/36827] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-08 16:55:52,753 INFO [Listener at localhost.localdomain/36827] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-08 16:55:52,754 INFO [Listener at localhost.localdomain/36827] ipc.NettyRpcServer(120): Bind to /148.251.75.209:44489 2023-06-08 16:55:52,754 INFO [Listener at localhost.localdomain/36827] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-08 16:55:52,758 DEBUG [Listener at localhost.localdomain/36827] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-08 16:55:52,758 INFO [Listener at localhost.localdomain/36827] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:55:52,759 INFO [Listener at localhost.localdomain/36827] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:55:52,760 INFO [Listener at localhost.localdomain/36827] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44489 connecting to ZooKeeper ensemble=127.0.0.1:59477 2023-06-08 16:55:52,763 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): regionserver:444890x0, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-08 16:55:52,764 DEBUG [Listener at localhost.localdomain/36827] zookeeper.ZKUtil(164): regionserver:444890x0, quorum=127.0.0.1:59477, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-08 16:55:52,764 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44489-0x101cba5b89d0001 connected 2023-06-08 16:55:52,765 DEBUG [Listener at 
localhost.localdomain/36827] zookeeper.ZKUtil(164): regionserver:44489-0x101cba5b89d0001, quorum=127.0.0.1:59477, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 16:55:52,766 DEBUG [Listener at localhost.localdomain/36827] zookeeper.ZKUtil(164): regionserver:44489-0x101cba5b89d0001, quorum=127.0.0.1:59477, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-08 16:55:52,766 DEBUG [Listener at localhost.localdomain/36827] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44489 2023-06-08 16:55:52,766 DEBUG [Listener at localhost.localdomain/36827] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44489 2023-06-08 16:55:52,766 DEBUG [Listener at localhost.localdomain/36827] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44489 2023-06-08 16:55:52,767 DEBUG [Listener at localhost.localdomain/36827] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44489 2023-06-08 16:55:52,767 DEBUG [Listener at localhost.localdomain/36827] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44489 2023-06-08 16:55:52,767 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,35513,1686243352711 2023-06-08 16:55:52,768 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-08 16:55:52,769 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,35513,1686243352711 2023-06-08 16:55:52,770 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-08 16:55:52,770 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): regionserver:44489-0x101cba5b89d0001, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-08 16:55:52,770 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:55:52,770 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-08 16:55:52,771 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,35513,1686243352711 from backup master directory 2023-06-08 16:55:52,771 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Set watcher on existing znode=/hbase/master 
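The ActiveMasterManager entries above follow the usual ZooKeeper leader-election pattern: the master publishes itself under /hbase/backup-masters, races to create the /hbase/master znode, and deletes its backup entry once it wins. A generic sketch of that race with the plain ZooKeeper client (class and method names here are invented for illustration; HBase's real logic lives in ActiveMasterManager/MasterAddressTracker and encodes the server name differently from the plain UTF-8 used below):

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ActiveMasterSketch {
  // Returns true if this process won the race and is now the active master.
  static boolean tryBecomeActive(ZooKeeper zk, String serverName) throws Exception {
    try {
      // EPHEMERAL: the znode disappears if this master's session dies, which is
      // what allows a backup master to take over later.
      zk.create("/hbase/master", serverName.getBytes("UTF-8"),
          ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
      return true;
    } catch (KeeperException.NodeExistsException e) {
      return false;  // someone else is already active; remain a backup master
    }
  }
}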
2023-06-08 16:55:52,772 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,35513,1686243352711 2023-06-08 16:55:52,772 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-06-08 16:55:52,772 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,35513,1686243352711 2023-06-08 16:55:52,772 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-08 16:55:52,786 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/hbase.id with ID: fa09af26-dce9-4890-b236-20fe02bee9a3 2023-06-08 16:55:52,796 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:55:52,798 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:55:52,806 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x75220940 to 127.0.0.1:59477 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-08 16:55:52,810 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@47208404, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-08 16:55:52,810 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-08 16:55:52,811 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-06-08 16:55:52,811 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 16:55:52,812 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 
'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/MasterData/data/master/store-tmp 2023-06-08 16:55:52,825 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:55:52,825 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-08 16:55:52,825 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:55:52,825 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:55:52,825 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-08 16:55:52,825 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:55:52,825 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:55:52,825 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-08 16:55:52,826 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/MasterData/WALs/jenkins-hbase20.apache.org,35513,1686243352711 2023-06-08 16:55:52,829 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C35513%2C1686243352711, suffix=, logDir=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/MasterData/WALs/jenkins-hbase20.apache.org,35513,1686243352711, archiveDir=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/MasterData/oldWALs, maxLogs=10 2023-06-08 16:55:52,836 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/MasterData/WALs/jenkins-hbase20.apache.org,35513,1686243352711/jenkins-hbase20.apache.org%2C35513%2C1686243352711.1686243352829 2023-06-08 16:55:52,836 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33675,DS-7c58c2f0-66a7-4784-8e63-03fefb7fbb9e,DISK], DatanodeInfoWithStorage[127.0.0.1:44491,DS-fd375f55-ae42-4931-984a-d4ab7aaaefff,DISK]] 2023-06-08 16:55:52,836 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-08 16:55:52,836 DEBUG 
[master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:55:52,836 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:55:52,836 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:55:52,842 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:55:52,843 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-08 16:55:52,844 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-08 16:55:52,844 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:55:52,845 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:55:52,845 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:55:52,848 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:55:52,850 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 16:55:52,851 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next 
sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=844330, jitterRate=0.07362164556980133}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-08 16:55:52,851 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-08 16:55:52,851 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-08 16:55:52,853 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-08 16:55:52,853 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-06-08 16:55:52,853 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-06-08 16:55:52,853 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-06-08 16:55:52,854 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-06-08 16:55:52,854 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-08 16:55:52,858 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-08 16:55:52,859 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-06-08 16:55:52,869 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-08 16:55:52,869 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
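The AbstractFSWAL line above ("blocksize=256 MB, rollsize=128 MB") is the knob this test class exercises: the WAL is rolled once it grows past rollsize, and rollsize is the WAL block size times hbase.regionserver.logroll.multiplier (0.5 by default), hence 128 MB for a 256 MB block. A small sketch of that arithmetic; the fallback values passed to getLong/getFloat below just mirror the logged numbers and are not claimed to be HBase's hard-coded defaults:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalRollSizeSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    long blockSize = conf.getLong("hbase.regionserver.hlog.blocksize",
        256L * 1024 * 1024);                            // 256 MB, as logged above
    float multiplier = conf.getFloat("hbase.regionserver.logroll.multiplier", 0.5f);
    long rollSize = (long) (blockSize * multiplier);    // 134217728 bytes = 128 MB
    System.out.println("WAL rolls at roughly " + rollSize + " bytes");
  }
}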
2023-06-08 16:55:52,869 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-08 16:55:52,870 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-08 16:55:52,870 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-08 16:55:52,872 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:55:52,872 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-08 16:55:52,872 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-08 16:55:52,873 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-08 16:55:52,874 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-08 16:55:52,874 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): regionserver:44489-0x101cba5b89d0001, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-08 16:55:52,874 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:55:52,874 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,35513,1686243352711, sessionid=0x101cba5b89d0000, setting cluster-up flag (Was=false) 2023-06-08 16:55:52,877 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:55:52,880 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-08 16:55:52,881 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,35513,1686243352711 2023-06-08 16:55:52,883 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:55:52,886 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-08 16:55:52,886 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,35513,1686243352711 2023-06-08 16:55:52,887 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/.hbase-snapshot/.tmp 2023-06-08 16:55:52,892 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-08 16:55:52,892 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-08 16:55:52,893 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-08 16:55:52,893 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-08 16:55:52,893 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-08 16:55:52,893 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-06-08 16:55:52,893 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:55:52,893 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-08 16:55:52,893 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:55:52,895 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1686243382895 2023-06-08 16:55:52,895 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-08 16:55:52,896 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-08 16:55:52,896 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-08 16:55:52,896 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): 
Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-08 16:55:52,896 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-08 16:55:52,896 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-08 16:55:52,896 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-08 16:55:52,898 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-08 16:55:52,898 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-08 16:55:52,899 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-08 16:55:52,899 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-08 16:55:52,899 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-08 16:55:52,899 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-08 16:55:52,899 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-08 16:55:52,900 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-08 16:55:52,901 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686243352899,5,FailOnTimeoutGroup] 2023-06-08 16:55:52,905 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686243352902,5,FailOnTimeoutGroup] 2023-06-08 16:55:52,905 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 
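The ChoreService entries above schedule the periodic cleaners (LogsCleaner every 600000 ms, HFileCleaner, and so on). For reference, a minimal chore of the same shape, assuming the public ScheduledChore/ChoreService/Stoppable classes; the "ToyCleaner" name and its body are made up for the sketch:

import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public class CleanerChoreSketch {
  public static void main(String[] args) {
    Stoppable stopper = new Stoppable() {
      private volatile boolean stopped = false;
      @Override public void stop(String why) { stopped = true; }
      @Override public boolean isStopped() { return stopped; }
    };
    ScheduledChore chore = new ScheduledChore("ToyCleaner", stopper, 600_000) {
      @Override protected void chore() {
        // Runs every 10 minutes, like the LogsCleaner period=600000 logged above.
        System.out.println("cleaning old WALs/HFiles...");
      }
    };
    ChoreService choreService = new ChoreService("sketch");
    choreService.scheduleChore(chore);   // scheduling produces the "Chore ... is enabled" lines
    // choreService.shutdown() would cancel everything when the test ends.
  }
}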
2023-06-08 16:55:52,905 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-08 16:55:52,905 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-08 16:55:52,905 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-06-08 16:55:52,912 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-08 16:55:52,912 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-08 16:55:52,913 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5 2023-06-08 16:55:52,920 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:55:52,922 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-08 16:55:52,923 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/data/hbase/meta/1588230740/info 2023-06-08 16:55:52,924 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 
0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-08 16:55:52,924 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:55:52,925 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-08 16:55:52,926 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/data/hbase/meta/1588230740/rep_barrier 2023-06-08 16:55:52,926 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-08 16:55:52,927 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:55:52,927 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-08 16:55:52,928 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/data/hbase/meta/1588230740/table 2023-06-08 16:55:52,929 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-08 16:55:52,930 INFO 
[StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:55:52,930 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/data/hbase/meta/1588230740 2023-06-08 16:55:52,931 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/data/hbase/meta/1588230740 2023-06-08 16:55:52,933 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-08 16:55:52,934 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-08 16:55:52,936 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 16:55:52,936 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=744292, jitterRate=-0.05358433723449707}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-08 16:55:52,937 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-08 16:55:52,937 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-08 16:55:52,937 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-08 16:55:52,937 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-08 16:55:52,937 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-08 16:55:52,937 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-08 16:55:52,937 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-08 16:55:52,937 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-08 16:55:52,938 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-08 16:55:52,938 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-08 16:55:52,938 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-08 16:55:52,940 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-08 16:55:52,941 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure 
table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-08 16:55:52,969 INFO [RS:0;jenkins-hbase20:44489] regionserver.HRegionServer(951): ClusterId : fa09af26-dce9-4890-b236-20fe02bee9a3 2023-06-08 16:55:52,970 DEBUG [RS:0;jenkins-hbase20:44489] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-08 16:55:52,973 DEBUG [RS:0;jenkins-hbase20:44489] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-08 16:55:52,973 DEBUG [RS:0;jenkins-hbase20:44489] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-08 16:55:52,975 DEBUG [RS:0;jenkins-hbase20:44489] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-08 16:55:52,976 DEBUG [RS:0;jenkins-hbase20:44489] zookeeper.ReadOnlyZKClient(139): Connect 0x07a38e45 to 127.0.0.1:59477 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-08 16:55:52,984 DEBUG [RS:0;jenkins-hbase20:44489] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@39c0a69, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-08 16:55:52,984 DEBUG [RS:0;jenkins-hbase20:44489] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@469e847e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-08 16:55:52,992 DEBUG [RS:0;jenkins-hbase20:44489] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:44489 2023-06-08 16:55:52,992 INFO [RS:0;jenkins-hbase20:44489] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-08 16:55:52,992 INFO [RS:0;jenkins-hbase20:44489] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-08 16:55:52,992 DEBUG [RS:0;jenkins-hbase20:44489] regionserver.HRegionServer(1022): About to register with Master. 
2023-06-08 16:55:52,993 INFO [RS:0;jenkins-hbase20:44489] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,35513,1686243352711 with isa=jenkins-hbase20.apache.org/148.251.75.209:44489, startcode=1686243352751 2023-06-08 16:55:52,993 DEBUG [RS:0;jenkins-hbase20:44489] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-08 16:55:52,996 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:52945, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-06-08 16:55:52,997 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35513] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,44489,1686243352751 2023-06-08 16:55:52,998 DEBUG [RS:0;jenkins-hbase20:44489] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5 2023-06-08 16:55:52,998 DEBUG [RS:0;jenkins-hbase20:44489] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:41831 2023-06-08 16:55:52,998 DEBUG [RS:0;jenkins-hbase20:44489] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-08 16:55:52,999 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 16:55:53,000 DEBUG [RS:0;jenkins-hbase20:44489] zookeeper.ZKUtil(162): regionserver:44489-0x101cba5b89d0001, quorum=127.0.0.1:59477, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44489,1686243352751 2023-06-08 16:55:53,000 WARN [RS:0;jenkins-hbase20:44489] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-06-08 16:55:53,000 INFO [RS:0;jenkins-hbase20:44489] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 16:55:53,001 DEBUG [RS:0;jenkins-hbase20:44489] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751 2023-06-08 16:55:53,001 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,44489,1686243352751] 2023-06-08 16:55:53,004 DEBUG [RS:0;jenkins-hbase20:44489] zookeeper.ZKUtil(162): regionserver:44489-0x101cba5b89d0001, quorum=127.0.0.1:59477, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44489,1686243352751 2023-06-08 16:55:53,005 DEBUG [RS:0;jenkins-hbase20:44489] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-08 16:55:53,005 INFO [RS:0;jenkins-hbase20:44489] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-08 16:55:53,006 INFO [RS:0;jenkins-hbase20:44489] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-08 16:55:53,007 INFO [RS:0;jenkins-hbase20:44489] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-08 16:55:53,007 INFO [RS:0;jenkins-hbase20:44489] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-08 16:55:53,010 INFO [RS:0;jenkins-hbase20:44489] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-08 16:55:53,011 INFO [RS:0;jenkins-hbase20:44489] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
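The FSHLogProvider instantiated above, and the roll size and maxLogs values reported a little further down in the "WAL configuration" entry, are all configuration-driven. A minimal Java sketch of the usual knobs, assuming the standard property names; the values are illustrative, not read from this test's site configuration (the 256 MB block size in the log most likely just derives from the mini-DFS block size rather than an explicit setting):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // "filesystem" maps to the FSHLogProvider named in the log line above
    // (an assumption about the provider key, not something this test is known to set).
    conf.set("hbase.wal.provider", "filesystem");
    // Controls the maxLogs=32 reported in the later "WAL configuration" entry.
    conf.setInt("hbase.regionserver.maxlogs", 32);
    // rollsize = WAL block size * this multiplier; 0.5 of a 256 MB block gives the logged 128 MB.
    conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);
    System.out.println(conf.get("hbase.wal.provider"));
  }
}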
2023-06-08 16:55:53,011 DEBUG [RS:0;jenkins-hbase20:44489] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:55:53,012 DEBUG [RS:0;jenkins-hbase20:44489] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:55:53,012 DEBUG [RS:0;jenkins-hbase20:44489] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:55:53,012 DEBUG [RS:0;jenkins-hbase20:44489] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:55:53,012 DEBUG [RS:0;jenkins-hbase20:44489] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:55:53,012 DEBUG [RS:0;jenkins-hbase20:44489] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-08 16:55:53,012 DEBUG [RS:0;jenkins-hbase20:44489] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:55:53,012 DEBUG [RS:0;jenkins-hbase20:44489] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:55:53,012 DEBUG [RS:0;jenkins-hbase20:44489] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:55:53,012 DEBUG [RS:0;jenkins-hbase20:44489] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:55:53,014 INFO [RS:0;jenkins-hbase20:44489] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-08 16:55:53,015 INFO [RS:0;jenkins-hbase20:44489] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-08 16:55:53,015 INFO [RS:0;jenkins-hbase20:44489] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-08 16:55:53,025 INFO [RS:0;jenkins-hbase20:44489] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-08 16:55:53,025 INFO [RS:0;jenkins-hbase20:44489] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44489,1686243352751-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-06-08 16:55:53,036 INFO [RS:0;jenkins-hbase20:44489] regionserver.Replication(203): jenkins-hbase20.apache.org,44489,1686243352751 started 2023-06-08 16:55:53,036 INFO [RS:0;jenkins-hbase20:44489] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,44489,1686243352751, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:44489, sessionid=0x101cba5b89d0001 2023-06-08 16:55:53,036 DEBUG [RS:0;jenkins-hbase20:44489] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-08 16:55:53,036 DEBUG [RS:0;jenkins-hbase20:44489] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,44489,1686243352751 2023-06-08 16:55:53,036 DEBUG [RS:0;jenkins-hbase20:44489] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,44489,1686243352751' 2023-06-08 16:55:53,036 DEBUG [RS:0;jenkins-hbase20:44489] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-08 16:55:53,037 DEBUG [RS:0;jenkins-hbase20:44489] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-08 16:55:53,037 DEBUG [RS:0;jenkins-hbase20:44489] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-08 16:55:53,037 DEBUG [RS:0;jenkins-hbase20:44489] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-08 16:55:53,037 DEBUG [RS:0;jenkins-hbase20:44489] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,44489,1686243352751 2023-06-08 16:55:53,037 DEBUG [RS:0;jenkins-hbase20:44489] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,44489,1686243352751' 2023-06-08 16:55:53,037 DEBUG [RS:0;jenkins-hbase20:44489] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-06-08 16:55:53,038 DEBUG [RS:0;jenkins-hbase20:44489] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-08 16:55:53,038 DEBUG [RS:0;jenkins-hbase20:44489] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-08 16:55:53,038 INFO [RS:0;jenkins-hbase20:44489] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-08 16:55:53,038 INFO [RS:0;jenkins-hbase20:44489] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-06-08 16:55:53,092 DEBUG [jenkins-hbase20:35513] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-08 16:55:53,092 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,44489,1686243352751, state=OPENING 2023-06-08 16:55:53,094 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-08 16:55:53,094 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:55:53,095 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,44489,1686243352751}] 2023-06-08 16:55:53,095 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-08 16:55:53,141 INFO [RS:0;jenkins-hbase20:44489] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C44489%2C1686243352751, suffix=, logDir=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751, archiveDir=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/oldWALs, maxLogs=32 2023-06-08 16:55:53,154 INFO [RS:0;jenkins-hbase20:44489] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243353142 2023-06-08 16:55:53,154 DEBUG [RS:0;jenkins-hbase20:44489] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44491,DS-fd375f55-ae42-4931-984a-d4ab7aaaefff,DISK], DatanodeInfoWithStorage[127.0.0.1:33675,DS-7c58c2f0-66a7-4784-8e63-03fefb7fbb9e,DISK]] 2023-06-08 16:55:53,249 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,44489,1686243352751 2023-06-08 16:55:53,249 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-08 16:55:53,252 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:35474, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-08 16:55:53,256 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-08 16:55:53,257 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 16:55:53,259 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C44489%2C1686243352751.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751, archiveDir=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/oldWALs, maxLogs=32 2023-06-08 16:55:53,268 
INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.meta.1686243353260.meta 2023-06-08 16:55:53,268 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33675,DS-7c58c2f0-66a7-4784-8e63-03fefb7fbb9e,DISK], DatanodeInfoWithStorage[127.0.0.1:44491,DS-fd375f55-ae42-4931-984a-d4ab7aaaefff,DISK]] 2023-06-08 16:55:53,268 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-08 16:55:53,268 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-08 16:55:53,269 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-08 16:55:53,269 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-06-08 16:55:53,269 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-08 16:55:53,269 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:55:53,269 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-08 16:55:53,269 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-08 16:55:53,271 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-08 16:55:53,272 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/data/hbase/meta/1588230740/info 2023-06-08 16:55:53,272 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/data/hbase/meta/1588230740/info 2023-06-08 16:55:53,273 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-08 16:55:53,274 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:55:53,274 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-08 16:55:53,275 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/data/hbase/meta/1588230740/rep_barrier 2023-06-08 16:55:53,275 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/data/hbase/meta/1588230740/rep_barrier 2023-06-08 16:55:53,275 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-08 16:55:53,276 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:55:53,276 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-08 16:55:53,277 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/data/hbase/meta/1588230740/table 2023-06-08 16:55:53,277 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/data/hbase/meta/1588230740/table 2023-06-08 16:55:53,277 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-08 16:55:53,278 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:55:53,279 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/data/hbase/meta/1588230740 2023-06-08 16:55:53,280 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/data/hbase/meta/1588230740 2023-06-08 16:55:53,282 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-08 16:55:53,283 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-08 16:55:53,284 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=740552, jitterRate=-0.05833953619003296}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-08 16:55:53,284 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-08 16:55:53,286 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1686243353249 2023-06-08 16:55:53,290 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-08 16:55:53,291 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-08 16:55:53,292 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,44489,1686243352751, state=OPEN 2023-06-08 16:55:53,293 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-08 16:55:53,293 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-08 16:55:53,296 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-08 16:55:53,296 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,44489,1686243352751 in 198 msec 2023-06-08 
16:55:53,299 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-08 16:55:53,299 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 358 msec 2023-06-08 16:55:53,301 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 410 msec 2023-06-08 16:55:53,301 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1686243353301, completionTime=-1 2023-06-08 16:55:53,301 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-08 16:55:53,301 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-06-08 16:55:53,304 DEBUG [hconnection-0x58a07d94-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-08 16:55:53,306 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:35484, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-08 16:55:53,308 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-08 16:55:53,308 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1686243413308 2023-06-08 16:55:53,308 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1686243473308 2023-06-08 16:55:53,308 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 7 msec 2023-06-08 16:55:53,314 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,35513,1686243352711-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-08 16:55:53,314 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,35513,1686243352711-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-08 16:55:53,314 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,35513,1686243352711-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-08 16:55:53,314 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:35513, period=300000, unit=MILLISECONDS is enabled. 2023-06-08 16:55:53,314 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-08 16:55:53,314 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
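Everything from InitMetaProcedure through the meta ASSIGN finishing in 358 msec is master-internal; the test side normally just starts the minicluster and waits for this point. A hedged sketch of that pattern, using HBaseTestingUtility methods as I understand them on branch-2.4 (this is not the TestLogRolling source; the option values mirror what this log shows: one master, one region server, a two-datanode WAL pipeline):

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;
import org.apache.hadoop.hbase.TableName;

public class MiniClusterStartSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(1)
        .numDataNodes(2)
        .build();
    util.startMiniCluster(option);
    // Returns once hbase:meta has been assigned, i.e. the state this log has just reached.
    util.waitUntilAllRegionsAssigned(TableName.META_TABLE_NAME);
    util.shutdownMiniCluster();
  }
}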
2023-06-08 16:55:53,315 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-08 16:55:53,316 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-08 16:55:53,317 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-08 16:55:53,318 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-08 16:55:53,319 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-08 16:55:53,321 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/.tmp/data/hbase/namespace/c27fb35ef4358c4e67ecf0e4f4382918 2023-06-08 16:55:53,322 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/.tmp/data/hbase/namespace/c27fb35ef4358c4e67ecf0e4f4382918 empty. 2023-06-08 16:55:53,322 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/.tmp/data/hbase/namespace/c27fb35ef4358c4e67ecf0e4f4382918 2023-06-08 16:55:53,322 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-08 16:55:53,337 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-08 16:55:53,339 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => c27fb35ef4358c4e67ecf0e4f4382918, NAME => 'hbase:namespace,,1686243353315.c27fb35ef4358c4e67ecf0e4f4382918.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/.tmp 2023-06-08 16:55:53,348 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686243353315.c27fb35ef4358c4e67ecf0e4f4382918.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:55:53,348 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing c27fb35ef4358c4e67ecf0e4f4382918, disabling compactions & flushes 2023-06-08 16:55:53,348 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region 
hbase:namespace,,1686243353315.c27fb35ef4358c4e67ecf0e4f4382918. 2023-06-08 16:55:53,348 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686243353315.c27fb35ef4358c4e67ecf0e4f4382918. 2023-06-08 16:55:53,348 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686243353315.c27fb35ef4358c4e67ecf0e4f4382918. after waiting 0 ms 2023-06-08 16:55:53,348 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686243353315.c27fb35ef4358c4e67ecf0e4f4382918. 2023-06-08 16:55:53,348 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686243353315.c27fb35ef4358c4e67ecf0e4f4382918. 2023-06-08 16:55:53,348 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for c27fb35ef4358c4e67ecf0e4f4382918: 2023-06-08 16:55:53,351 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-08 16:55:53,352 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1686243353315.c27fb35ef4358c4e67ecf0e4f4382918.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686243353352"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686243353352"}]},"ts":"1686243353352"} 2023-06-08 16:55:53,354 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-08 16:55:53,356 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-08 16:55:53,356 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686243353356"}]},"ts":"1686243353356"} 2023-06-08 16:55:53,357 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-08 16:55:53,361 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=c27fb35ef4358c4e67ecf0e4f4382918, ASSIGN}] 2023-06-08 16:55:53,363 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=c27fb35ef4358c4e67ecf0e4f4382918, ASSIGN 2023-06-08 16:55:53,364 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=c27fb35ef4358c4e67ecf0e4f4382918, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44489,1686243352751; forceNewPlan=false, retain=false 2023-06-08 16:55:53,516 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=c27fb35ef4358c4e67ecf0e4f4382918, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44489,1686243352751 2023-06-08 16:55:53,517 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1686243353315.c27fb35ef4358c4e67ecf0e4f4382918.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686243353516"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686243353516"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686243353516"}]},"ts":"1686243353516"} 2023-06-08 16:55:53,522 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure c27fb35ef4358c4e67ecf0e4f4382918, server=jenkins-hbase20.apache.org,44489,1686243352751}] 2023-06-08 16:55:53,684 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1686243353315.c27fb35ef4358c4e67ecf0e4f4382918. 2023-06-08 16:55:53,685 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c27fb35ef4358c4e67ecf0e4f4382918, NAME => 'hbase:namespace,,1686243353315.c27fb35ef4358c4e67ecf0e4f4382918.', STARTKEY => '', ENDKEY => ''} 2023-06-08 16:55:53,685 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace c27fb35ef4358c4e67ecf0e4f4382918 2023-06-08 16:55:53,685 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686243353315.c27fb35ef4358c4e67ecf0e4f4382918.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:55:53,686 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for c27fb35ef4358c4e67ecf0e4f4382918 2023-06-08 16:55:53,686 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for c27fb35ef4358c4e67ecf0e4f4382918 2023-06-08 16:55:53,688 INFO [StoreOpener-c27fb35ef4358c4e67ecf0e4f4382918-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region c27fb35ef4358c4e67ecf0e4f4382918 2023-06-08 16:55:53,690 DEBUG [StoreOpener-c27fb35ef4358c4e67ecf0e4f4382918-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/data/hbase/namespace/c27fb35ef4358c4e67ecf0e4f4382918/info 2023-06-08 16:55:53,690 DEBUG [StoreOpener-c27fb35ef4358c4e67ecf0e4f4382918-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/data/hbase/namespace/c27fb35ef4358c4e67ecf0e4f4382918/info 2023-06-08 16:55:53,691 INFO [StoreOpener-c27fb35ef4358c4e67ecf0e4f4382918-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c27fb35ef4358c4e67ecf0e4f4382918 columnFamilyName info 2023-06-08 16:55:53,691 INFO [StoreOpener-c27fb35ef4358c4e67ecf0e4f4382918-1] regionserver.HStore(310): Store=c27fb35ef4358c4e67ecf0e4f4382918/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:55:53,693 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/data/hbase/namespace/c27fb35ef4358c4e67ecf0e4f4382918 2023-06-08 16:55:53,693 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/data/hbase/namespace/c27fb35ef4358c4e67ecf0e4f4382918 2023-06-08 16:55:53,697 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for c27fb35ef4358c4e67ecf0e4f4382918 2023-06-08 16:55:53,701 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/data/hbase/namespace/c27fb35ef4358c4e67ecf0e4f4382918/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 16:55:53,702 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened c27fb35ef4358c4e67ecf0e4f4382918; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=865494, jitterRate=0.10053355991840363}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-08 16:55:53,702 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for c27fb35ef4358c4e67ecf0e4f4382918: 2023-06-08 16:55:53,705 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1686243353315.c27fb35ef4358c4e67ecf0e4f4382918., pid=6, masterSystemTime=1686243353676 2023-06-08 16:55:53,709 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1686243353315.c27fb35ef4358c4e67ecf0e4f4382918. 2023-06-08 16:55:53,709 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1686243353315.c27fb35ef4358c4e67ecf0e4f4382918. 
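A note on the ConstantSizeRegionSplitPolicy numbers in these "Opened ..." entries: desiredMaxFileSize is the configured region max file size plus that size scaled by jitterRate. Assuming the 786432-byte limit that the MAX_FILESIZE warning further down reports, that relation reproduces 865494 for this region and the earlier 744292 and 740552 figures from their respective jitter rates. A tiny sketch of the arithmetic (the formula is my reading of the logged values, not quoted from the split-policy source):

public class SplitSizeJitterSketch {
  public static void main(String[] args) {
    long maxFileSize = 786432L;  // the limit reported by the MAX_FILESIZE warning below
    double[] jitterRates = {0.10053355991840363, -0.05358433723449707, -0.05833953619003296};
    for (double jitterRate : jitterRates) {
      // Reproduces the desiredMaxFileSize figures 865494, 744292 and 740552 seen in this log.
      long desired = maxFileSize + (long) (maxFileSize * jitterRate);
      System.out.println(desired);
    }
  }
}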
2023-06-08 16:55:53,710 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=c27fb35ef4358c4e67ecf0e4f4382918, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44489,1686243352751 2023-06-08 16:55:53,711 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1686243353315.c27fb35ef4358c4e67ecf0e4f4382918.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686243353710"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686243353710"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686243353710"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686243353710"}]},"ts":"1686243353710"} 2023-06-08 16:55:53,718 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-08 16:55:53,718 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure c27fb35ef4358c4e67ecf0e4f4382918, server=jenkins-hbase20.apache.org,44489,1686243352751 in 192 msec 2023-06-08 16:55:53,721 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-08 16:55:53,721 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=c27fb35ef4358c4e67ecf0e4f4382918, ASSIGN in 357 msec 2023-06-08 16:55:53,722 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-08 16:55:53,722 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686243353722"}]},"ts":"1686243353722"} 2023-06-08 16:55:53,723 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-08 16:55:53,726 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-08 16:55:53,728 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 411 msec 2023-06-08 16:55:53,818 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-08 16:55:53,819 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-08 16:55:53,820 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:55:53,832 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-08 16:55:53,842 DEBUG [Listener at localhost.localdomain/36827-EventThread] 
zookeeper.ZKWatcher(600): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-08 16:55:53,846 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 14 msec 2023-06-08 16:55:53,855 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-08 16:55:53,864 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-08 16:55:53,868 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 13 msec 2023-06-08 16:55:53,884 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-08 16:55:53,886 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-08 16:55:53,886 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.114sec 2023-06-08 16:55:53,886 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-08 16:55:53,886 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-06-08 16:55:53,887 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-08 16:55:53,887 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,35513,1686243352711-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-08 16:55:53,887 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,35513,1686243352711-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
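Once the two CreateNamespaceProcedures above finish, the "default" and "hbase" namespaces become visible through the client Admin API. As a hedged illustration of that client-side view (standard HBase 2.x Admin usage, not the master-internal path the log is showing):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class NamespaceListSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // The two namespaces whose creation just completed (pid=7 and pid=8) are listable here.
      for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
        System.out.println(ns.getName());   // expected: "default" and "hbase"
      }
    }
  }
}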
2023-06-08 16:55:53,890 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-08 16:55:53,970 DEBUG [Listener at localhost.localdomain/36827] zookeeper.ReadOnlyZKClient(139): Connect 0x6fdef141 to 127.0.0.1:59477 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-08 16:55:53,979 DEBUG [Listener at localhost.localdomain/36827] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4910e5c7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-08 16:55:53,982 DEBUG [hconnection-0x6610357f-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-08 16:55:53,985 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:35494, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-08 16:55:53,988 INFO [Listener at localhost.localdomain/36827] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,35513,1686243352711 2023-06-08 16:55:53,988 INFO [Listener at localhost.localdomain/36827] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:55:53,992 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-08 16:55:53,992 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:55:53,993 INFO [Listener at localhost.localdomain/36827] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-08 16:55:53,993 INFO [Listener at localhost.localdomain/36827] wal.TestLogRolling(429): Starting testLogRollOnPipelineRestart 2023-06-08 16:55:53,993 INFO [Listener at localhost.localdomain/36827] wal.TestLogRolling(432): Replication=2 2023-06-08 16:55:53,995 DEBUG [Listener at localhost.localdomain/36827] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-06-08 16:55:53,998 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:39008, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-06-08 16:55:54,000 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35513] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-06-08 16:55:54,000 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35513] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
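The two TableDescriptorChecker warnings are expected: the test deliberately runs with a very small region max file size and memstore flush size so that flushes, rolls and splits happen quickly. A hedged sketch of client-side calls that would produce the same balanceSwitch=false and the create request shown next; in the actual test these limits may well come from the configuration keys named in the warnings rather than the table descriptor, so treat the builder calls below as one illustrative way to get there:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateTestTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      admin.balancerSwitch(false, true);  // matches "set balanceSwitch=false" in the log above
      TableDescriptorBuilder builder = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("TestLogRolling-testLogRollOnPipelineRestart"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("info"))
          .setMaxFileSize(786432L)        // would trigger the MAX_FILESIZE warning seen above
          .setMemStoreFlushSize(8192L);   // would trigger the MEMSTORE_FLUSHSIZE warning seen above
      admin.createTable(builder.build()); // yields a CreateTableProcedure like pid=9 below
    }
  }
}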
2023-06-08 16:55:54,001 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35513] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-08 16:55:54,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35513] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart 2023-06-08 16:55:54,005 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_PRE_OPERATION 2023-06-08 16:55:54,005 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35513] master.MasterRpcServices(697): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnPipelineRestart" procId is: 9 2023-06-08 16:55:54,006 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-08 16:55:54,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35513] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-08 16:55:54,009 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/da760d7ea1fe1558eb41ee117a3b04cd 2023-06-08 16:55:54,009 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/da760d7ea1fe1558eb41ee117a3b04cd empty. 
2023-06-08 16:55:54,010 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/da760d7ea1fe1558eb41ee117a3b04cd 2023-06-08 16:55:54,010 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnPipelineRestart regions 2023-06-08 16:55:54,020 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/.tabledesc/.tableinfo.0000000001 2023-06-08 16:55:54,021 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(7675): creating {ENCODED => da760d7ea1fe1558eb41ee117a3b04cd, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1686243354000.da760d7ea1fe1558eb41ee117a3b04cd.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/.tmp 2023-06-08 16:55:54,029 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1686243354000.da760d7ea1fe1558eb41ee117a3b04cd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:55:54,029 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1604): Closing da760d7ea1fe1558eb41ee117a3b04cd, disabling compactions & flushes 2023-06-08 16:55:54,029 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1686243354000.da760d7ea1fe1558eb41ee117a3b04cd. 2023-06-08 16:55:54,029 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1686243354000.da760d7ea1fe1558eb41ee117a3b04cd. 2023-06-08 16:55:54,029 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1686243354000.da760d7ea1fe1558eb41ee117a3b04cd. after waiting 0 ms 2023-06-08 16:55:54,029 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1686243354000.da760d7ea1fe1558eb41ee117a3b04cd. 2023-06-08 16:55:54,029 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnPipelineRestart,,1686243354000.da760d7ea1fe1558eb41ee117a3b04cd. 
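Table creation runs as a master-side procedure (pid=9 here); the client blocks on a future that repeatedly asks the master whether that procedure has finished, which is what the later "Checking to see if procedure is done pid=9" lines and the final "procId: 9 completed" message correspond to. A hedged sketch of the equivalent asynchronous client call is below; the helper name and the timeout are illustrative, not taken from the test.

    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.TableDescriptor;

    public class CreateTableWaitSketch {
      // createTableAsync returns a future backed by the master-side CreateTableProcedure;
      // get() makes the client poll the master (the "Checking to see if procedure is done"
      // lines) until the procedure reports completion.
      static void createAndWait(Admin admin, TableDescriptor td) throws Exception {
        Future<Void> f = admin.createTableAsync(td);
        f.get(30, TimeUnit.SECONDS);                     // illustrative timeout
        if (!admin.isTableAvailable(td.getTableName())) {
          throw new IllegalStateException("table not online after procedure completed");
        }
      }
    }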
2023-06-08 16:55:54,030 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1558): Region close journal for da760d7ea1fe1558eb41ee117a3b04cd: 2023-06-08 16:55:54,032 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ADD_TO_META 2023-06-08 16:55:54,033 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1686243354000.da760d7ea1fe1558eb41ee117a3b04cd.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1686243354033"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686243354033"}]},"ts":"1686243354033"} 2023-06-08 16:55:54,035 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-08 16:55:54,036 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-08 16:55:54,036 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686243354036"}]},"ts":"1686243354036"} 2023-06-08 16:55:54,037 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLING in hbase:meta 2023-06-08 16:55:54,040 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=da760d7ea1fe1558eb41ee117a3b04cd, ASSIGN}] 2023-06-08 16:55:54,042 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=da760d7ea1fe1558eb41ee117a3b04cd, ASSIGN 2023-06-08 16:55:54,044 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=da760d7ea1fe1558eb41ee117a3b04cd, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44489,1686243352751; forceNewPlan=false, retain=false 2023-06-08 16:55:54,196 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=da760d7ea1fe1558eb41ee117a3b04cd, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44489,1686243352751 2023-06-08 16:55:54,197 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1686243354000.da760d7ea1fe1558eb41ee117a3b04cd.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1686243354196"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686243354196"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686243354196"}]},"ts":"1686243354196"} 2023-06-08 16:55:54,203 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure da760d7ea1fe1558eb41ee117a3b04cd, 
server=jenkins-hbase20.apache.org,44489,1686243352751}] 2023-06-08 16:55:54,366 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnPipelineRestart,,1686243354000.da760d7ea1fe1558eb41ee117a3b04cd. 2023-06-08 16:55:54,367 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => da760d7ea1fe1558eb41ee117a3b04cd, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1686243354000.da760d7ea1fe1558eb41ee117a3b04cd.', STARTKEY => '', ENDKEY => ''} 2023-06-08 16:55:54,367 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnPipelineRestart da760d7ea1fe1558eb41ee117a3b04cd 2023-06-08 16:55:54,368 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1686243354000.da760d7ea1fe1558eb41ee117a3b04cd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:55:54,368 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for da760d7ea1fe1558eb41ee117a3b04cd 2023-06-08 16:55:54,368 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for da760d7ea1fe1558eb41ee117a3b04cd 2023-06-08 16:55:54,372 INFO [StoreOpener-da760d7ea1fe1558eb41ee117a3b04cd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region da760d7ea1fe1558eb41ee117a3b04cd 2023-06-08 16:55:54,375 DEBUG [StoreOpener-da760d7ea1fe1558eb41ee117a3b04cd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/data/default/TestLogRolling-testLogRollOnPipelineRestart/da760d7ea1fe1558eb41ee117a3b04cd/info 2023-06-08 16:55:54,375 DEBUG [StoreOpener-da760d7ea1fe1558eb41ee117a3b04cd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/data/default/TestLogRolling-testLogRollOnPipelineRestart/da760d7ea1fe1558eb41ee117a3b04cd/info 2023-06-08 16:55:54,376 INFO [StoreOpener-da760d7ea1fe1558eb41ee117a3b04cd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region da760d7ea1fe1558eb41ee117a3b04cd columnFamilyName info 2023-06-08 16:55:54,377 INFO [StoreOpener-da760d7ea1fe1558eb41ee117a3b04cd-1] regionserver.HStore(310): Store=da760d7ea1fe1558eb41ee117a3b04cd/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, 
encoding=NONE, compression=NONE 2023-06-08 16:55:54,378 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/data/default/TestLogRolling-testLogRollOnPipelineRestart/da760d7ea1fe1558eb41ee117a3b04cd 2023-06-08 16:55:54,379 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/data/default/TestLogRolling-testLogRollOnPipelineRestart/da760d7ea1fe1558eb41ee117a3b04cd 2023-06-08 16:55:54,383 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for da760d7ea1fe1558eb41ee117a3b04cd 2023-06-08 16:55:54,386 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/data/default/TestLogRolling-testLogRollOnPipelineRestart/da760d7ea1fe1558eb41ee117a3b04cd/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 16:55:54,386 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened da760d7ea1fe1558eb41ee117a3b04cd; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=828058, jitterRate=0.052930623292922974}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-08 16:55:54,387 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for da760d7ea1fe1558eb41ee117a3b04cd: 2023-06-08 16:55:54,388 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnPipelineRestart,,1686243354000.da760d7ea1fe1558eb41ee117a3b04cd., pid=11, masterSystemTime=1686243354358 2023-06-08 16:55:54,391 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnPipelineRestart,,1686243354000.da760d7ea1fe1558eb41ee117a3b04cd. 2023-06-08 16:55:54,391 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnPipelineRestart,,1686243354000.da760d7ea1fe1558eb41ee117a3b04cd. 
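With the region now open (next sequenceid=2 above), the test writes rows such as row1002 and row1003 and reads them back, producing the "Validated row ..." lines that appear later in this log. A minimal put-then-verify sketch against the HBase 2.x Table API is shown below; the qualifier, the value layout, and the helper name are illustrative assumptions, not the test's exact code.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RowValidationSketch {
      static final byte[] FAMILY = Bytes.toBytes("info");
      static final byte[] QUALIFIER = Bytes.toBytes("q");   // illustrative qualifier

      // Write a row, then read it back and compare, roughly what the
      // "Validated row row1002" / "Validated row row1003" messages report.
      static void writeAndValidate(Connection conn, String rowKey) throws Exception {
        TableName name = TableName.valueOf("TestLogRolling-testLogRollOnPipelineRestart");
        try (Table table = conn.getTable(name)) {
          byte[] row = Bytes.toBytes(rowKey);
          table.put(new Put(row).addColumn(FAMILY, QUALIFIER, row));  // value equals the row key
          Result r = table.get(new Get(row));
          if (!Bytes.equals(row, r.getValue(FAMILY, QUALIFIER))) {
            throw new AssertionError("row " + rowKey + " did not read back intact");
          }
        }
      }
    }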
2023-06-08 16:55:54,392 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=da760d7ea1fe1558eb41ee117a3b04cd, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44489,1686243352751 2023-06-08 16:55:54,392 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1686243354000.da760d7ea1fe1558eb41ee117a3b04cd.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1686243354392"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686243354392"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686243354392"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686243354392"}]},"ts":"1686243354392"} 2023-06-08 16:55:54,398 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-06-08 16:55:54,398 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure da760d7ea1fe1558eb41ee117a3b04cd, server=jenkins-hbase20.apache.org,44489,1686243352751 in 192 msec 2023-06-08 16:55:54,401 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-06-08 16:55:54,401 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=da760d7ea1fe1558eb41ee117a3b04cd, ASSIGN in 358 msec 2023-06-08 16:55:54,402 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-08 16:55:54,402 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686243354402"}]},"ts":"1686243354402"} 2023-06-08 16:55:54,404 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLED in hbase:meta 2023-06-08 16:55:54,406 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_POST_OPERATION 2023-06-08 16:55:54,408 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart in 406 msec 2023-06-08 16:55:56,736 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-08 16:55:59,005 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnPipelineRestart' 2023-06-08 16:56:04,010 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35513] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-08 16:56:04,011 INFO [Listener at localhost.localdomain/36827] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnPipelineRestart, procId: 9 completed 2023-06-08 16:56:04,018 DEBUG [Listener at localhost.localdomain/36827] hbase.HBaseTestingUtility(2627): Found 1 regions for table 
TestLogRolling-testLogRollOnPipelineRestart 2023-06-08 16:56:04,018 DEBUG [Listener at localhost.localdomain/36827] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnPipelineRestart,,1686243354000.da760d7ea1fe1558eb41ee117a3b04cd. 2023-06-08 16:56:06,023 INFO [Listener at localhost.localdomain/36827] wal.TestLogRolling(469): log.getCurrentFileName()): hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243353142 2023-06-08 16:56:06,024 WARN [Listener at localhost.localdomain/36827] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 16:56:06,026 WARN [ResponseProcessor for block BP-1367966598-148.251.75.209-1686243352238:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1367966598-148.251.75.209-1686243352238:blk_1073741833_1009 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-08 16:56:06,026 WARN [ResponseProcessor for block BP-1367966598-148.251.75.209-1686243352238:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1367966598-148.251.75.209-1686243352238:blk_1073741829_1005 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-08 16:56:06,028 WARN [DataStreamer for file /user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.meta.1686243353260.meta block BP-1367966598-148.251.75.209-1686243352238:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-1367966598-148.251.75.209-1686243352238:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:33675,DS-7c58c2f0-66a7-4784-8e63-03fefb7fbb9e,DISK], DatanodeInfoWithStorage[127.0.0.1:44491,DS-fd375f55-ae42-4931-984a-d4ab7aaaefff,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:33675,DS-7c58c2f0-66a7-4784-8e63-03fefb7fbb9e,DISK]) is bad. 2023-06-08 16:56:06,028 WARN [DataStreamer for file /user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/MasterData/WALs/jenkins-hbase20.apache.org,35513,1686243352711/jenkins-hbase20.apache.org%2C35513%2C1686243352711.1686243352829 block BP-1367966598-148.251.75.209-1686243352238:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-1367966598-148.251.75.209-1686243352238:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:33675,DS-7c58c2f0-66a7-4784-8e63-03fefb7fbb9e,DISK], DatanodeInfoWithStorage[127.0.0.1:44491,DS-fd375f55-ae42-4931-984a-d4ab7aaaefff,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:33675,DS-7c58c2f0-66a7-4784-8e63-03fefb7fbb9e,DISK]) is bad. 
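The EOFExceptions and "datanode ... is bad" recovery messages above are the expected fallout of the test bouncing the mini-cluster DataNodes underneath the live WAL and master log streams: each open block loses a replica and the DFS client attempts pipeline recovery until the nodes come back ("Data Nodes restarted" later in the log). A rough sketch of that restart step is below, assuming an HBaseTestingUtility-backed MiniDFSCluster as in this run; the test's own sequence may differ, for example restarting nodes one at a time.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public class DataNodeBounceSketch {
      // Restart every DataNode in the mini DFS cluster while HBase keeps its WAL
      // streams open; those streams see EOF / "is bad" errors like the ones logged
      // above until the pipeline is re-established or the WAL is rolled.
      static void bounceDataNodes(HBaseTestingUtility util) throws Exception {
        MiniDFSCluster dfs = util.getDFSCluster();
        dfs.restartDataNodes();   // stop and start all DataNodes
        dfs.waitActive();         // wait until they have re-registered with the NameNode
      }
    }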
2023-06-08 16:56:06,030 WARN [ResponseProcessor for block BP-1367966598-148.251.75.209-1686243352238:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1367966598-148.251.75.209-1686243352238:blk_1073741832_1008 java.io.IOException: Bad response ERROR for BP-1367966598-148.251.75.209-1686243352238:blk_1073741832_1008 from datanode DatanodeInfoWithStorage[127.0.0.1:33675,DS-7c58c2f0-66a7-4784-8e63-03fefb7fbb9e,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-06-08 16:56:06,033 WARN [DataStreamer for file /user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243353142 block BP-1367966598-148.251.75.209-1686243352238:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-1367966598-148.251.75.209-1686243352238:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:44491,DS-fd375f55-ae42-4931-984a-d4ab7aaaefff,DISK], DatanodeInfoWithStorage[127.0.0.1:33675,DS-7c58c2f0-66a7-4784-8e63-03fefb7fbb9e,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:33675,DS-7c58c2f0-66a7-4784-8e63-03fefb7fbb9e,DISK]) is bad. 2023-06-08 16:56:06,034 WARN [PacketResponder: BP-1367966598-148.251.75.209-1686243352238:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:33675]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:56:06,041 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-68305317_17 at /127.0.0.1:55902 [Receiving block BP-1367966598-148.251.75.209-1686243352238:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:44491:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:55902 dst: /127.0.0.1:44491 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:56:06,042 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-68305317_17 at /127.0.0.1:55928 [Receiving block BP-1367966598-148.251.75.209-1686243352238:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:44491:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:55928 dst: /127.0.0.1:44491 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:44491 remote=/127.0.0.1:55928]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:56:06,042 WARN [PacketResponder: BP-1367966598-148.251.75.209-1686243352238:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:44491]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at 
java.lang.Thread.run(Thread.java:750) 2023-06-08 16:56:06,043 INFO [Listener at localhost.localdomain/36827] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 16:56:06,045 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_85765142_17 at /127.0.0.1:55878 [Receiving block BP-1367966598-148.251.75.209-1686243352238:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:44491:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:55878 dst: /127.0.0.1:44491 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:44491 remote=/127.0.0.1:55878]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:56:06,045 WARN [PacketResponder: BP-1367966598-148.251.75.209-1686243352238:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:44491]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:56:06,045 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-68305317_17 at /127.0.0.1:40546 [Receiving block BP-1367966598-148.251.75.209-1686243352238:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:33675:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:40546 dst: /127.0.0.1:33675 
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:56:06,047 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_85765142_17 at /127.0.0.1:40520 [Receiving block BP-1367966598-148.251.75.209-1686243352238:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:33675:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:40520 dst: /127.0.0.1:33675 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:56:06,147 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-68305317_17 at /127.0.0.1:40540 [Receiving block BP-1367966598-148.251.75.209-1686243352238:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:33675:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:40540 dst: /127.0.0.1:33675 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:56:06,154 WARN [BP-1367966598-148.251.75.209-1686243352238 heartbeating to localhost.localdomain/127.0.0.1:41831] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-08 16:56:06,154 WARN [BP-1367966598-148.251.75.209-1686243352238 heartbeating to localhost.localdomain/127.0.0.1:41831] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1367966598-148.251.75.209-1686243352238 (Datanode Uuid c2261922-210e-4eb1-913b-e3161fd72dd2) service to localhost.localdomain/127.0.0.1:41831 2023-06-08 16:56:06,155 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/cluster_a9c3a89b-cd23-0f0f-5e7f-ddf244302ee3/dfs/data/data3/current/BP-1367966598-148.251.75.209-1686243352238] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:56:06,156 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/cluster_a9c3a89b-cd23-0f0f-5e7f-ddf244302ee3/dfs/data/data4/current/BP-1367966598-148.251.75.209-1686243352238] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:56:06,163 WARN [Listener at localhost.localdomain/36827] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 16:56:06,166 WARN [Listener at localhost.localdomain/36827] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 16:56:06,167 INFO [Listener at localhost.localdomain/36827] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 16:56:06,173 INFO [Listener at localhost.localdomain/36827] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/java.io.tmpdir/Jetty_localhost_39787_datanode____.bk6w21/webapp 2023-06-08 16:56:06,246 INFO [Listener at localhost.localdomain/36827] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39787 2023-06-08 16:56:06,253 WARN [Listener at localhost.localdomain/44359] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 16:56:06,258 WARN [Listener at localhost.localdomain/44359] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 16:56:06,258 WARN [ResponseProcessor for block BP-1367966598-148.251.75.209-1686243352238:blk_1073741832_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1367966598-148.251.75.209-1686243352238:blk_1073741832_1016 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-08 16:56:06,258 WARN [ResponseProcessor for block BP-1367966598-148.251.75.209-1686243352238:blk_1073741829_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1367966598-148.251.75.209-1686243352238:blk_1073741829_1015 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-08 16:56:06,258 WARN [ResponseProcessor for block BP-1367966598-148.251.75.209-1686243352238:blk_1073741833_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1367966598-148.251.75.209-1686243352238:blk_1073741833_1014 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-08 16:56:06,262 INFO [Listener at localhost.localdomain/44359] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 16:56:06,304 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x47f6586edb7a8297: Processing first storage report for DS-7c58c2f0-66a7-4784-8e63-03fefb7fbb9e from datanode c2261922-210e-4eb1-913b-e3161fd72dd2 2023-06-08 16:56:06,304 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x47f6586edb7a8297: from storage DS-7c58c2f0-66a7-4784-8e63-03fefb7fbb9e node DatanodeRegistration(127.0.0.1:39331, datanodeUuid=c2261922-210e-4eb1-913b-e3161fd72dd2, infoPort=43323, infoSecurePort=0, ipcPort=44359, storageInfo=lv=-57;cid=testClusterID;nsid=746946676;c=1686243352238), blocks: 6, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-08 16:56:06,304 INFO [Block 
report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x47f6586edb7a8297: Processing first storage report for DS-82443175-e49b-420b-90f5-2e5523b99999 from datanode c2261922-210e-4eb1-913b-e3161fd72dd2 2023-06-08 16:56:06,304 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x47f6586edb7a8297: from storage DS-82443175-e49b-420b-90f5-2e5523b99999 node DatanodeRegistration(127.0.0.1:39331, datanodeUuid=c2261922-210e-4eb1-913b-e3161fd72dd2, infoPort=43323, infoSecurePort=0, ipcPort=44359, storageInfo=lv=-57;cid=testClusterID;nsid=746946676;c=1686243352238), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 16:56:06,366 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-68305317_17 at /127.0.0.1:49962 [Receiving block BP-1367966598-148.251.75.209-1686243352238:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:44491:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49962 dst: /127.0.0.1:44491 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:56:06,367 WARN [BP-1367966598-148.251.75.209-1686243352238 heartbeating to localhost.localdomain/127.0.0.1:41831] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-08 16:56:06,366 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_85765142_17 at /127.0.0.1:49946 [Receiving block BP-1367966598-148.251.75.209-1686243352238:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:44491:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49946 dst: /127.0.0.1:44491 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:56:06,366 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-68305317_17 at /127.0.0.1:49944 [Receiving block BP-1367966598-148.251.75.209-1686243352238:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:44491:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49944 dst: /127.0.0.1:44491 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:56:06,368 WARN [BP-1367966598-148.251.75.209-1686243352238 heartbeating to localhost.localdomain/127.0.0.1:41831] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1367966598-148.251.75.209-1686243352238 (Datanode Uuid 0c68d5bf-79ac-43fc-a534-69fe8a4918f3) service to localhost.localdomain/127.0.0.1:41831 2023-06-08 16:56:06,370 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/cluster_a9c3a89b-cd23-0f0f-5e7f-ddf244302ee3/dfs/data/data1/current/BP-1367966598-148.251.75.209-1686243352238] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:56:06,371 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/cluster_a9c3a89b-cd23-0f0f-5e7f-ddf244302ee3/dfs/data/data2/current/BP-1367966598-148.251.75.209-1686243352238] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:56:06,381 WARN [Listener at localhost.localdomain/44359] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 16:56:06,384 WARN [Listener at localhost.localdomain/44359] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 16:56:06,385 INFO [Listener at localhost.localdomain/44359] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 16:56:06,391 INFO [Listener at localhost.localdomain/44359] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/java.io.tmpdir/Jetty_localhost_41947_datanode____jm2wxy/webapp 2023-06-08 16:56:06,465 INFO [Listener at localhost.localdomain/44359] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41947 2023-06-08 16:56:06,471 WARN [Listener at localhost.localdomain/36085] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 16:56:06,521 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x50b26db6a77e5804: Processing first storage report for DS-fd375f55-ae42-4931-984a-d4ab7aaaefff from datanode 0c68d5bf-79ac-43fc-a534-69fe8a4918f3 2023-06-08 16:56:06,522 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x50b26db6a77e5804: from storage DS-fd375f55-ae42-4931-984a-d4ab7aaaefff node DatanodeRegistration(127.0.0.1:45055, datanodeUuid=0c68d5bf-79ac-43fc-a534-69fe8a4918f3, infoPort=38741, infoSecurePort=0, ipcPort=36085, storageInfo=lv=-57;cid=testClusterID;nsid=746946676;c=1686243352238), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 16:56:06,522 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x50b26db6a77e5804: Processing first storage report for DS-c54a996a-606a-4e49-b881-e534da990fe5 from datanode 0c68d5bf-79ac-43fc-a534-69fe8a4918f3 2023-06-08 16:56:06,522 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x50b26db6a77e5804: from storage DS-c54a996a-606a-4e49-b881-e534da990fe5 node DatanodeRegistration(127.0.0.1:45055, datanodeUuid=0c68d5bf-79ac-43fc-a534-69fe8a4918f3, infoPort=38741, infoSecurePort=0, ipcPort=36085, storageInfo=lv=-57;cid=testClusterID;nsid=746946676;c=1686243352238), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 16:56:07,476 INFO [Listener at localhost.localdomain/36085] wal.TestLogRolling(481): Data Nodes restarted 2023-06-08 16:56:07,479 INFO [Listener at localhost.localdomain/36085] wal.AbstractTestLogRolling(233): Validated row row1002 2023-06-08 16:56:07,481 WARN [RS:0;jenkins-hbase20:44489.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=5, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44491,DS-fd375f55-ae42-4931-984a-d4ab7aaaefff,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:56:07,483 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C44489%2C1686243352751:(num 1686243353142) roll requested 2023-06-08 16:56:07,483 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44489] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44491,DS-fd375f55-ae42-4931-984a-d4ab7aaaefff,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:56:07,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44489] ipc.CallRunner(144): callId: 11 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:35494 deadline: 1686243377480, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL 2023-06-08 16:56:07,490 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243353142 newFile=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243367483 2023-06-08 16:56:07,490 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL 2023-06-08 16:56:07,490 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243353142 with entries=5, filesize=2.11 KB; new WAL /user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243367483 2023-06-08 16:56:07,490 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with 
pipeline: [DatanodeInfoWithStorage[127.0.0.1:39331,DS-7c58c2f0-66a7-4784-8e63-03fefb7fbb9e,DISK], DatanodeInfoWithStorage[127.0.0.1:45055,DS-fd375f55-ae42-4931-984a-d4ab7aaaefff,DISK]] 2023-06-08 16:56:07,490 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44491,DS-fd375f55-ae42-4931-984a-d4ab7aaaefff,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:56:07,490 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243353142 is not closed yet, will try archiving it next time 2023-06-08 16:56:07,491 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243353142; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44491,DS-fd375f55-ae42-4931-984a-d4ab7aaaefff,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:56:19,595 INFO [Listener at localhost.localdomain/36085] wal.AbstractTestLogRolling(233): Validated row row1003 2023-06-08 16:56:21,602 WARN [Listener at localhost.localdomain/36085] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 16:56:21,606 WARN [ResponseProcessor for block BP-1367966598-148.251.75.209-1686243352238:blk_1073741838_1017] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1367966598-148.251.75.209-1686243352238:blk_1073741838_1017 java.io.IOException: Bad response ERROR for BP-1367966598-148.251.75.209-1686243352238:blk_1073741838_1017 from datanode DatanodeInfoWithStorage[127.0.0.1:45055,DS-fd375f55-ae42-4931-984a-d4ab7aaaefff,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-06-08 16:56:21,607 WARN [DataStreamer for file /user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243367483 block BP-1367966598-148.251.75.209-1686243352238:blk_1073741838_1017] hdfs.DataStreamer(1548): Error Recovery for BP-1367966598-148.251.75.209-1686243352238:blk_1073741838_1017 in pipeline [DatanodeInfoWithStorage[127.0.0.1:39331,DS-7c58c2f0-66a7-4784-8e63-03fefb7fbb9e,DISK], DatanodeInfoWithStorage[127.0.0.1:45055,DS-fd375f55-ae42-4931-984a-d4ab7aaaefff,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:45055,DS-fd375f55-ae42-4931-984a-d4ab7aaaefff,DISK]) is bad. 
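The "preLogRoll: oldFile=... newFile=..." line above comes from a WAL listener the test registers (the anonymous TestLogRolling$7), which is how it observes each roll forced by the damaged pipeline: the failed append raises DamagedWALException, the log roller opens a new writer on the restarted DataNodes, and the listener records the old and new WAL paths. A hedged sketch of such a listener on the HBase 2.x WAL API is below; the listener body and helper name are illustrative.

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.regionserver.wal.WALActionsListener;
    import org.apache.hadoop.hbase.wal.WAL;

    public class RollObserverSketch {
      // Track every WAL roll so a test can assert that the pipeline failure above
      // really forced a new writer onto the restarted DataNodes.
      static List<Path> watchRolls(WAL wal) {
        final List<Path> newFiles = new ArrayList<>();
        wal.registerWALActionsListener(new WALActionsListener() {
          @Override
          public void preLogRoll(Path oldPath, Path newPath) {
            // mirrors the "preLogRoll: oldFile=... newFile=..." line in the log
            System.out.println("preLogRoll: oldFile=" + oldPath + " newFile=" + newPath);
          }
          @Override
          public void postLogRoll(Path oldPath, Path newPath) {
            newFiles.add(newPath);
          }
        });
        return newFiles;
      }
    }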
2023-06-08 16:56:21,607 WARN [PacketResponder: BP-1367966598-148.251.75.209-1686243352238:blk_1073741838_1017, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:45055]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.nio.channels.ClosedByInterruptException at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:56:21,609 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-68305317_17 at /127.0.0.1:33588 [Receiving block BP-1367966598-148.251.75.209-1686243352238:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:39331:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:33588 dst: /127.0.0.1:39331 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:56:21,614 INFO [Listener at localhost.localdomain/36085] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 16:56:21,719 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-68305317_17 at /127.0.0.1:37958 [Receiving block BP-1367966598-148.251.75.209-1686243352238:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:45055:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:37958 dst: /127.0.0.1:45055 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:56:21,722 WARN [BP-1367966598-148.251.75.209-1686243352238 heartbeating to localhost.localdomain/127.0.0.1:41831] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-08 16:56:21,722 WARN [BP-1367966598-148.251.75.209-1686243352238 heartbeating to localhost.localdomain/127.0.0.1:41831] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1367966598-148.251.75.209-1686243352238 (Datanode Uuid 0c68d5bf-79ac-43fc-a534-69fe8a4918f3) service to localhost.localdomain/127.0.0.1:41831 2023-06-08 16:56:21,723 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/cluster_a9c3a89b-cd23-0f0f-5e7f-ddf244302ee3/dfs/data/data1/current/BP-1367966598-148.251.75.209-1686243352238] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:56:21,724 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/cluster_a9c3a89b-cd23-0f0f-5e7f-ddf244302ee3/dfs/data/data2/current/BP-1367966598-148.251.75.209-1686243352238] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:56:21,732 WARN [Listener at localhost.localdomain/36085] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 16:56:21,734 WARN [Listener at localhost.localdomain/36085] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 16:56:21,736 INFO [Listener at localhost.localdomain/36085] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 16:56:21,743 INFO [Listener at localhost.localdomain/36085] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/java.io.tmpdir/Jetty_localhost_45415_datanode____.x02ixu/webapp 2023-06-08 16:56:21,814 INFO [Listener at localhost.localdomain/36085] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45415 2023-06-08 16:56:21,822 WARN [Listener at localhost.localdomain/46855] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 16:56:21,824 WARN [Listener at localhost.localdomain/46855] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 16:56:21,825 WARN [ResponseProcessor for block BP-1367966598-148.251.75.209-1686243352238:blk_1073741838_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1367966598-148.251.75.209-1686243352238:blk_1073741838_1018 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-08 16:56:21,828 INFO [Listener at localhost.localdomain/46855] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 16:56:21,877 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9beda4cf5c8d192a: Processing first storage report for DS-fd375f55-ae42-4931-984a-d4ab7aaaefff from datanode 0c68d5bf-79ac-43fc-a534-69fe8a4918f3 2023-06-08 16:56:21,877 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9beda4cf5c8d192a: from storage DS-fd375f55-ae42-4931-984a-d4ab7aaaefff node DatanodeRegistration(127.0.0.1:45535, datanodeUuid=0c68d5bf-79ac-43fc-a534-69fe8a4918f3, infoPort=38195, infoSecurePort=0, ipcPort=46855, storageInfo=lv=-57;cid=testClusterID;nsid=746946676;c=1686243352238), blocks: 8, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-08 16:56:21,877 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9beda4cf5c8d192a: Processing first storage report for DS-c54a996a-606a-4e49-b881-e534da990fe5 from datanode 0c68d5bf-79ac-43fc-a534-69fe8a4918f3 2023-06-08 16:56:21,877 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9beda4cf5c8d192a: from storage DS-c54a996a-606a-4e49-b881-e534da990fe5 node DatanodeRegistration(127.0.0.1:45535, datanodeUuid=0c68d5bf-79ac-43fc-a534-69fe8a4918f3, infoPort=38195, infoSecurePort=0, ipcPort=46855, storageInfo=lv=-57;cid=testClusterID;nsid=746946676;c=1686243352238), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 16:56:21,931 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-68305317_17 at /127.0.0.1:34648 [Receiving block BP-1367966598-148.251.75.209-1686243352238:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:39331:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34648 dst: /127.0.0.1:39331 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:56:21,934 WARN [BP-1367966598-148.251.75.209-1686243352238 heartbeating to localhost.localdomain/127.0.0.1:41831] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-08 16:56:21,934 WARN [BP-1367966598-148.251.75.209-1686243352238 heartbeating to localhost.localdomain/127.0.0.1:41831] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1367966598-148.251.75.209-1686243352238 (Datanode Uuid c2261922-210e-4eb1-913b-e3161fd72dd2) service to localhost.localdomain/127.0.0.1:41831 2023-06-08 16:56:21,935 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/cluster_a9c3a89b-cd23-0f0f-5e7f-ddf244302ee3/dfs/data/data3/current/BP-1367966598-148.251.75.209-1686243352238] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:56:21,935 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/cluster_a9c3a89b-cd23-0f0f-5e7f-ddf244302ee3/dfs/data/data4/current/BP-1367966598-148.251.75.209-1686243352238] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:56:21,943 WARN [Listener at localhost.localdomain/46855] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 16:56:21,945 WARN [Listener at localhost.localdomain/46855] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 16:56:21,947 INFO [Listener at localhost.localdomain/46855] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 16:56:21,953 INFO [Listener at localhost.localdomain/46855] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/java.io.tmpdir/Jetty_localhost_32855_datanode____.yssdq8/webapp 2023-06-08 16:56:22,026 INFO [Listener at localhost.localdomain/46855] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:32855 2023-06-08 16:56:22,033 WARN [Listener at localhost.localdomain/44557] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 16:56:22,081 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x60de86dcaa937ad0: Processing first storage report for DS-7c58c2f0-66a7-4784-8e63-03fefb7fbb9e from datanode c2261922-210e-4eb1-913b-e3161fd72dd2 2023-06-08 16:56:22,081 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x60de86dcaa937ad0: from storage DS-7c58c2f0-66a7-4784-8e63-03fefb7fbb9e node DatanodeRegistration(127.0.0.1:43209, datanodeUuid=c2261922-210e-4eb1-913b-e3161fd72dd2, infoPort=44465, infoSecurePort=0, ipcPort=44557, storageInfo=lv=-57;cid=testClusterID;nsid=746946676;c=1686243352238), blocks: 6, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-08 16:56:22,081 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x60de86dcaa937ad0: Processing first storage report for DS-82443175-e49b-420b-90f5-2e5523b99999 from datanode c2261922-210e-4eb1-913b-e3161fd72dd2 2023-06-08 16:56:22,081 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x60de86dcaa937ad0: from storage DS-82443175-e49b-420b-90f5-2e5523b99999 node DatanodeRegistration(127.0.0.1:43209, datanodeUuid=c2261922-210e-4eb1-913b-e3161fd72dd2, infoPort=44465, infoSecurePort=0, ipcPort=44557, storageInfo=lv=-57;cid=testClusterID;nsid=746946676;c=1686243352238), blocks: 8, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 16:56:22,897 WARN [master/jenkins-hbase20:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44491,DS-fd375f55-ae42-4931-984a-d4ab7aaaefff,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:56:22,898 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C35513%2C1686243352711:(num 1686243352829) roll requested 2023-06-08 16:56:22,898 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44491,DS-fd375f55-ae42-4931-984a-d4ab7aaaefff,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:56:22,899 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423) at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44491,DS-fd375f55-ae42-4931-984a-d4ab7aaaefff,DISK]] are bad. 
Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:56:22,910 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL 2023-06-08 16:56:22,910 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/MasterData/WALs/jenkins-hbase20.apache.org,35513,1686243352711/jenkins-hbase20.apache.org%2C35513%2C1686243352711.1686243352829 with entries=88, filesize=43.82 KB; new WAL /user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/MasterData/WALs/jenkins-hbase20.apache.org,35513,1686243352711/jenkins-hbase20.apache.org%2C35513%2C1686243352711.1686243382898 2023-06-08 16:56:22,910 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45535,DS-fd375f55-ae42-4931-984a-d4ab7aaaefff,DISK], DatanodeInfoWithStorage[127.0.0.1:43209,DS-7c58c2f0-66a7-4784-8e63-03fefb7fbb9e,DISK]] 2023-06-08 16:56:22,910 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/MasterData/WALs/jenkins-hbase20.apache.org,35513,1686243352711/jenkins-hbase20.apache.org%2C35513%2C1686243352711.1686243352829 is not closed yet, will try archiving it next time 2023-06-08 16:56:22,910 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44491,DS-fd375f55-ae42-4931-984a-d4ab7aaaefff,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:56:22,911 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/MasterData/WALs/jenkins-hbase20.apache.org,35513,1686243352711/jenkins-hbase20.apache.org%2C35513%2C1686243352711.1686243352829; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44491,DS-fd375f55-ae42-4931-984a-d4ab7aaaefff,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:56:23,038 INFO [Listener at localhost.localdomain/44557] wal.TestLogRolling(498): Data Nodes restarted 2023-06-08 16:56:23,042 INFO [Listener at localhost.localdomain/44557] wal.AbstractTestLogRolling(233): Validated row row1004 2023-06-08 16:56:23,044 WARN [RS:0;jenkins-hbase20:44489.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=8, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39331,DS-7c58c2f0-66a7-4784-8e63-03fefb7fbb9e,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:56:23,045 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C44489%2C1686243352751:(num 1686243367483) roll requested 2023-06-08 16:56:23,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44489] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39331,DS-7c58c2f0-66a7-4784-8e63-03fefb7fbb9e,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:56:23,047 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44489] ipc.CallRunner(144): callId: 18 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:35494 deadline: 1686243393044, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL 2023-06-08 16:56:23,060 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243367483 newFile=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243383046 2023-06-08 16:56:23,060 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL 2023-06-08 16:56:23,061 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243367483 with entries=2, filesize=2.37 KB; new WAL /user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243383046 2023-06-08 16:56:23,061 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45535,DS-fd375f55-ae42-4931-984a-d4ab7aaaefff,DISK], DatanodeInfoWithStorage[127.0.0.1:43209,DS-7c58c2f0-66a7-4784-8e63-03fefb7fbb9e,DISK]] 2023-06-08 16:56:23,061 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39331,DS-7c58c2f0-66a7-4784-8e63-03fefb7fbb9e,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:56:23,061 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243367483 is not closed yet, will try archiving it next time 2023-06-08 16:56:23,061 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243367483; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39331,DS-7c58c2f0-66a7-4784-8e63-03fefb7fbb9e,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:56:35,099 DEBUG [Listener at localhost.localdomain/44557] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243383046 newFile=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243395086 2023-06-08 16:56:35,100 INFO [Listener at localhost.localdomain/44557] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243383046 with entries=1, filesize=1.22 KB; new WAL /user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243395086 2023-06-08 16:56:35,107 DEBUG [Listener at localhost.localdomain/44557] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43209,DS-7c58c2f0-66a7-4784-8e63-03fefb7fbb9e,DISK], DatanodeInfoWithStorage[127.0.0.1:45535,DS-fd375f55-ae42-4931-984a-d4ab7aaaefff,DISK]] 2023-06-08 16:56:35,107 DEBUG [Listener at localhost.localdomain/44557] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243383046 is not closed yet, will try archiving it next time 2023-06-08 16:56:35,107 DEBUG [Listener at localhost.localdomain/44557] wal.TestLogRolling(512): recovering lease for 
hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243353142 2023-06-08 16:56:35,109 INFO [Listener at localhost.localdomain/44557] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243353142 2023-06-08 16:56:35,112 WARN [IPC Server handler 4 on default port 41831] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243353142 has not been closed. Lease recovery is in progress. RecoveryId = 1022 for block blk_1073741832_1016 2023-06-08 16:56:35,114 INFO [Listener at localhost.localdomain/44557] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243353142 after 5ms 2023-06-08 16:56:36,113 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@54e9bbc7] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-1367966598-148.251.75.209-1686243352238:blk_1073741832_1016, datanode=DatanodeInfoWithStorage[127.0.0.1:43209,null,null]) java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741832_1016, replica=ReplicaWaitingToBeRecovered, blk_1073741832_1008, RWR getNumBytes() = 2162 getBytesOnDisk() = 2162 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/cluster_a9c3a89b-cd23-0f0f-5e7f-ddf244302ee3/dfs/data/data4/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/cluster_a9c3a89b-cd23-0f0f-5e7f-ddf244302ee3/dfs/data/data4/current/BP-1367966598-148.251.75.209-1686243352238/current/rbw/blk_1073741832 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:56:39,116 INFO [Listener at localhost.localdomain/44557] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on 
file=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243353142 after 4007ms 2023-06-08 16:56:39,116 DEBUG [Listener at localhost.localdomain/44557] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243353142 2023-06-08 16:56:39,133 DEBUG [Listener at localhost.localdomain/44557] wal.TestLogRolling(522): #3: [\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1686243353702/Put/vlen=176/seqid=0] 2023-06-08 16:56:39,133 DEBUG [Listener at localhost.localdomain/44557] wal.TestLogRolling(522): #4: [default/info:d/1686243353838/Put/vlen=9/seqid=0] 2023-06-08 16:56:39,133 DEBUG [Listener at localhost.localdomain/44557] wal.TestLogRolling(522): #5: [hbase/info:d/1686243353861/Put/vlen=7/seqid=0] 2023-06-08 16:56:39,134 DEBUG [Listener at localhost.localdomain/44557] wal.TestLogRolling(522): #3: [\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1686243354387/Put/vlen=232/seqid=0] 2023-06-08 16:56:39,134 DEBUG [Listener at localhost.localdomain/44557] wal.TestLogRolling(522): #4: [row1002/info:/1686243364021/Put/vlen=1045/seqid=0] 2023-06-08 16:56:39,134 DEBUG [Listener at localhost.localdomain/44557] wal.ProtobufLogReader(420): EOF at position 2162 2023-06-08 16:56:39,134 DEBUG [Listener at localhost.localdomain/44557] wal.TestLogRolling(512): recovering lease for hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243367483 2023-06-08 16:56:39,134 INFO [Listener at localhost.localdomain/44557] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243367483 2023-06-08 16:56:39,135 WARN [IPC Server handler 3 on default port 41831] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243367483 has not been closed. Lease recovery is in progress. 
RecoveryId = 1023 for block blk_1073741838_1018 2023-06-08 16:56:39,135 INFO [Listener at localhost.localdomain/44557] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243367483 after 1ms 2023-06-08 16:56:40,086 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@5d248951] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-1367966598-148.251.75.209-1686243352238:blk_1073741838_1018, datanode=DatanodeInfoWithStorage[127.0.0.1:45535,null,null]) java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR getNumBytes() = 2425 getBytesOnDisk() = 2425 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/cluster_a9c3a89b-cd23-0f0f-5e7f-ddf244302ee3/dfs/data/data1/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/cluster_a9c3a89b-cd23-0f0f-5e7f-ddf244302ee3/dfs/data/data1/current/BP-1367966598-148.251.75.209-1686243352238/current/rbw/blk_1073741838 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55) at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:348) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46) at 
org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR getNumBytes() = 2425 getBytesOnDisk() = 2425 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/cluster_a9c3a89b-cd23-0f0f-5e7f-ddf244302ee3/dfs/data/data1/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/cluster_a9c3a89b-cd23-0f0f-5e7f-ddf244302ee3/dfs/data/data1/current/BP-1367966598-148.251.75.209-1686243352238/current/rbw/blk_1073741838 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55) at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy43.initReplicaRecovery(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolTranslatorPB.initReplicaRecovery(InterDatanodeProtocolTranslatorPB.java:83) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346) ... 
4 more 2023-06-08 16:56:43,136 INFO [Listener at localhost.localdomain/44557] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243367483 after 4002ms 2023-06-08 16:56:43,136 DEBUG [Listener at localhost.localdomain/44557] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243367483 2023-06-08 16:56:43,142 DEBUG [Listener at localhost.localdomain/44557] wal.TestLogRolling(522): #6: [row1003/info:/1686243377585/Put/vlen=1045/seqid=0] 2023-06-08 16:56:43,142 DEBUG [Listener at localhost.localdomain/44557] wal.TestLogRolling(522): #7: [row1004/info:/1686243379597/Put/vlen=1045/seqid=0] 2023-06-08 16:56:43,143 DEBUG [Listener at localhost.localdomain/44557] wal.ProtobufLogReader(420): EOF at position 2425 2023-06-08 16:56:43,143 DEBUG [Listener at localhost.localdomain/44557] wal.TestLogRolling(512): recovering lease for hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243383046 2023-06-08 16:56:43,143 INFO [Listener at localhost.localdomain/44557] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243383046 2023-06-08 16:56:43,144 INFO [Listener at localhost.localdomain/44557] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=0 on file=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243383046 after 1ms 2023-06-08 16:56:43,144 DEBUG [Listener at localhost.localdomain/44557] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243383046 2023-06-08 16:56:43,151 DEBUG [Listener at localhost.localdomain/44557] wal.TestLogRolling(522): #9: [row1005/info:/1686243393084/Put/vlen=1045/seqid=0] 2023-06-08 16:56:43,151 DEBUG [Listener at localhost.localdomain/44557] wal.TestLogRolling(512): recovering lease for hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243395086 2023-06-08 16:56:43,151 INFO [Listener at localhost.localdomain/44557] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243395086 2023-06-08 16:56:43,152 WARN [IPC Server handler 0 on default port 41831] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243395086 has not 
been closed. Lease recovery is in progress. RecoveryId = 1024 for block blk_1073741841_1021 2023-06-08 16:56:43,152 INFO [Listener at localhost.localdomain/44557] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243395086 after 1ms 2023-06-08 16:56:44,087 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_85765142_17 at /127.0.0.1:60350 [Receiving block BP-1367966598-148.251.75.209-1686243352238:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:43209:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:60350 dst: /127.0.0.1:43209 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:43209 remote=/127.0.0.1:60350]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:56:44,090 WARN [ResponseProcessor for block BP-1367966598-148.251.75.209-1686243352238:blk_1073741841_1021] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1367966598-148.251.75.209-1686243352238:blk_1073741841_1021 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-06-08 16:56:44,091 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_85765142_17 at /127.0.0.1:53826 [Receiving block BP-1367966598-148.251.75.209-1686243352238:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:45535:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:53826 dst: /127.0.0.1:45535 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:56:44,092 WARN [DataStreamer for file /user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243395086 block BP-1367966598-148.251.75.209-1686243352238:blk_1073741841_1021] hdfs.DataStreamer(1548): Error Recovery for BP-1367966598-148.251.75.209-1686243352238:blk_1073741841_1021 in pipeline [DatanodeInfoWithStorage[127.0.0.1:43209,DS-7c58c2f0-66a7-4784-8e63-03fefb7fbb9e,DISK], DatanodeInfoWithStorage[127.0.0.1:45535,DS-fd375f55-ae42-4931-984a-d4ab7aaaefff,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:43209,DS-7c58c2f0-66a7-4784-8e63-03fefb7fbb9e,DISK]) is bad. 2023-06-08 16:56:44,099 WARN [DataStreamer for file /user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243395086 block BP-1367966598-148.251.75.209-1686243352238:blk_1073741841_1021] hdfs.DataStreamer(823): DataStreamer Exception org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1367966598-148.251.75.209-1686243352238:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:56:47,154 INFO [Listener at localhost.localdomain/44557] util.RecoverLeaseFSUtils(175): Recovered 
lease, attempt=1 on file=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243395086 after 4003ms 2023-06-08 16:56:47,154 DEBUG [Listener at localhost.localdomain/44557] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243395086 2023-06-08 16:56:47,162 DEBUG [Listener at localhost.localdomain/44557] wal.ProtobufLogReader(420): EOF at position 83 2023-06-08 16:56:47,163 INFO [Listener at localhost.localdomain/44557] regionserver.HRegion(2745): Flushing c27fb35ef4358c4e67ecf0e4f4382918 1/1 column families, dataSize=78 B heapSize=488 B 2023-06-08 16:56:47,165 WARN [RS:0;jenkins-hbase20:44489.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=7, requesting roll of WAL org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1367966598-148.251.75.209-1686243352238:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:56:47,165 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C44489%2C1686243352751:(num 1686243395086) roll requested 2023-06-08 16:56:47,165 DEBUG [Listener at localhost.localdomain/44557] regionserver.HRegion(2446): Flush status journal for c27fb35ef4358c4e67ecf0e4f4382918: 2023-06-08 16:56:47,165 INFO [Listener at localhost.localdomain/44557] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at 
com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1367966598-148.251.75.209-1686243352238:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at 
com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:56:47,167 INFO [Listener at localhost.localdomain/44557] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.96 KB heapSize=5.48 KB 2023-06-08 16:56:47,168 WARN [RS_OPEN_META-regionserver/jenkins-hbase20:0-0.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=15, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44491,DS-fd375f55-ae42-4931-984a-d4ab7aaaefff,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:56:47,168 DEBUG [Listener at localhost.localdomain/44557] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-06-08 16:56:47,168 INFO [Listener at localhost.localdomain/44557] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44491,DS-fd375f55-ae42-4931-984a-d4ab7aaaefff,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:56:47,170 INFO [Listener at localhost.localdomain/44557] regionserver.HRegion(2745): Flushing da760d7ea1fe1558eb41ee117a3b04cd 1/1 column families, dataSize=4.20 KB heapSize=4.75 KB 2023-06-08 16:56:47,170 DEBUG [Listener at localhost.localdomain/44557] regionserver.HRegion(2446): Flush status journal for da760d7ea1fe1558eb41ee117a3b04cd: 2023-06-08 16:56:47,170 INFO [Listener at localhost.localdomain/44557] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1367966598-148.251.75.209-1686243352238:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at 
sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:56:47,173 INFO [Listener at localhost.localdomain/44557] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-08 16:56:47,173 INFO [Listener at localhost.localdomain/44557] client.ConnectionImplementation(1980): Closing master protocol: MasterService 2023-06-08 16:56:47,173 DEBUG [Listener at localhost.localdomain/44557] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6fdef141 to 127.0.0.1:59477 2023-06-08 16:56:47,173 DEBUG [Listener at localhost.localdomain/44557] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 16:56:47,174 DEBUG [Listener at localhost.localdomain/44557] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-06-08 16:56:47,174 DEBUG [Listener at localhost.localdomain/44557] util.JVMClusterUtil(257): Found active master hash=1152006687, stopped=false 2023-06-08 16:56:47,174 INFO [Listener at localhost.localdomain/44557] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,35513,1686243352711 2023-06-08 16:56:47,178 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-08 16:56:47,178 INFO [Listener at localhost.localdomain/44557] procedure2.ProcedureExecutor(629): Stopping 2023-06-08 16:56:47,178 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): regionserver:44489-0x101cba5b89d0001, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-08 16:56:47,178 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:56:47,179 DEBUG [Listener at localhost.localdomain/44557] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x75220940 to 127.0.0.1:59477 2023-06-08 16:56:47,179 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 16:56:47,179 DEBUG [Listener at localhost.localdomain/44557] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 16:56:47,179 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): 
regionserver:44489-0x101cba5b89d0001, quorum=127.0.0.1:59477, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 16:56:47,180 INFO [Listener at localhost.localdomain/44557] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,44489,1686243352751' ***** 2023-06-08 16:56:47,180 INFO [Listener at localhost.localdomain/44557] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-08 16:56:47,180 INFO [RS:0;jenkins-hbase20:44489] regionserver.HeapMemoryManager(220): Stopping 2023-06-08 16:56:47,180 INFO [RS:0;jenkins-hbase20:44489] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-08 16:56:47,180 INFO [RS:0;jenkins-hbase20:44489] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-06-08 16:56:47,180 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-08 16:56:47,180 INFO [RS:0;jenkins-hbase20:44489] regionserver.HRegionServer(3303): Received CLOSE for c27fb35ef4358c4e67ecf0e4f4382918 2023-06-08 16:56:47,180 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243395086 newFile=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243407165 2023-06-08 16:56:47,181 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL 2023-06-08 16:56:47,181 INFO [RS:0;jenkins-hbase20:44489] regionserver.HRegionServer(3303): Received CLOSE for da760d7ea1fe1558eb41ee117a3b04cd 2023-06-08 16:56:47,181 INFO [RS:0;jenkins-hbase20:44489] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,44489,1686243352751 2023-06-08 16:56:47,181 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243395086 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243407165 2023-06-08 16:56:47,181 DEBUG [RS:0;jenkins-hbase20:44489] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x07a38e45 to 127.0.0.1:59477 2023-06-08 16:56:47,181 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing c27fb35ef4358c4e67ecf0e4f4382918, disabling compactions & flushes 2023-06-08 16:56:47,181 DEBUG [RS:0;jenkins-hbase20:44489] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 16:56:47,181 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... 
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1367966598-148.251.75.209-1686243352238:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:56:47,181 INFO [RS:0;jenkins-hbase20:44489] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-08 16:56:47,181 INFO [RS:0;jenkins-hbase20:44489] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-08 16:56:47,181 INFO [RS:0;jenkins-hbase20:44489] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-06-08 16:56:47,181 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686243353315.c27fb35ef4358c4e67ecf0e4f4382918. 2023-06-08 16:56:47,181 INFO [RS:0;jenkins-hbase20:44489] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-08 16:56:47,181 ERROR [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(462): Close of WAL hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243395086 failed. 
Cause="Unexpected BlockUCState: BP-1367966598-148.251.75.209-1686243352238:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) ", errors=3, hasUnflushedEntries=false 2023-06-08 16:56:47,181 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686243353315.c27fb35ef4358c4e67ecf0e4f4382918. 2023-06-08 16:56:47,181 ERROR [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(426): Failed close of WAL writer hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243395086, unflushedEntries=0 org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1367966598-148.251.75.209-1686243352238:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:56:47,182 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): 
Closing 1588230740, disabling compactions & flushes 2023-06-08 16:56:47,181 INFO [RS:0;jenkins-hbase20:44489] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-06-08 16:56:47,181 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686243353315.c27fb35ef4358c4e67ecf0e4f4382918. after waiting 0 ms 2023-06-08 16:56:47,182 DEBUG [RS:0;jenkins-hbase20:44489] regionserver.HRegionServer(1478): Online Regions={c27fb35ef4358c4e67ecf0e4f4382918=hbase:namespace,,1686243353315.c27fb35ef4358c4e67ecf0e4f4382918., 1588230740=hbase:meta,,1.1588230740, da760d7ea1fe1558eb41ee117a3b04cd=TestLogRolling-testLogRollOnPipelineRestart,,1686243354000.da760d7ea1fe1558eb41ee117a3b04cd.} 2023-06-08 16:56:47,182 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-08 16:56:47,182 ERROR [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(221): Roll wal failed and waiting timeout, will not retry org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751/jenkins-hbase20.apache.org%2C44489%2C1686243352751.1686243395086, unflushedEntries=0 at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:427) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.replaceWriter(AbstractFSWAL.java:828) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:884) at org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:304) at org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:211) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1367966598-148.251.75.209-1686243352238:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:56:47,182 DEBUG [RS:0;jenkins-hbase20:44489] regionserver.HRegionServer(1504): Waiting on 
1588230740, c27fb35ef4358c4e67ecf0e4f4382918, da760d7ea1fe1558eb41ee117a3b04cd 2023-06-08 16:56:47,182 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-08 16:56:47,182 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686243353315.c27fb35ef4358c4e67ecf0e4f4382918. 2023-06-08 16:56:47,182 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-08 16:56:47,182 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing c27fb35ef4358c4e67ecf0e4f4382918 1/1 column families, dataSize=78 B heapSize=728 B 2023-06-08 16:56:47,182 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-08 16:56:47,182 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.96 KB heapSize=5.95 KB 2023-06-08 16:56:47,182 WARN [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultMemStore(90): Snapshot called again without clearing previous. Doing nothing. Another ongoing flush or did we fail last attempt? 2023-06-08 16:56:47,182 WARN [RS:0;jenkins-hbase20:44489.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=9, requesting roll of WAL java.nio.channels.ClosedChannelException at org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.throwException4Close(DataStreamer.java:324) at org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:151) at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:105) at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58) at java.io.DataOutputStream.write(DataOutputStream.java:107) at org.apache.hbase.thirdparty.com.google.protobuf.CodedOutputStream$OutputStreamEncoder.doFlush(CodedOutputStream.java:3041) at org.apache.hbase.thirdparty.com.google.protobuf.CodedOutputStream$OutputStreamEncoder.flushIfNotAvailable(CodedOutputStream.java:3036) at org.apache.hbase.thirdparty.com.google.protobuf.CodedOutputStream$OutputStreamEncoder.writeUInt64(CodedOutputStream.java:2726) at org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos$WALKey.writeTo(WALProtos.java:1878) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractMessageLite.writeDelimitedTo(AbstractMessageLite.java:95) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.append(ProtobufLogWriter.java:55) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doAppend(FSHLog.java:329) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doAppend(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendEntry(AbstractFSWAL.java:1105) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1199) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:56:47,183 WARN [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] 
regionserver.HRegion(2760): Received unexpected exception trying to write ABORT_FLUSH marker to WAL: java.io.IOException: Cannot append; log is closed, regionName = hbase:meta,,1.1588230740 at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.stampSequenceIdAndPublishToRingBuffer(AbstractFSWAL.java:1166) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.append(FSHLog.java:513) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendMarker(AbstractFSWAL.java:1228) at org.apache.hadoop.hbase.regionserver.wal.WALUtil.doFullMarkerAppendTransaction(WALUtil.java:161) at org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeFlushMarker(WALUtil.java:89) at org.apache.hadoop.hbase.regionserver.HRegion.doAbortFlushToWAL(HRegion.java:2758) at org.apache.hadoop.hbase.regionserver.HRegion.internalPrepareFlushCache(HRegion.java:2711) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2578) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2552) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2543) at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1733) at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1554) at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:105) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:102) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) in region hbase:meta,,1.1588230740 2023-06-08 16:56:47,183 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-08 16:56:47,183 ERROR [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase20.apache.org,44489,1686243352751: Unrecoverable exception while closing hbase:meta,,1.1588230740 ***** java.io.IOException: Cannot append; log is closed, regionName = hbase:meta,,1.1588230740 at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.stampSequenceIdAndPublishToRingBuffer(AbstractFSWAL.java:1166) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.append(FSHLog.java:513) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendMarker(AbstractFSWAL.java:1228) at org.apache.hadoop.hbase.regionserver.wal.WALUtil.doFullMarkerAppendTransaction(WALUtil.java:161) at org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeFlushMarker(WALUtil.java:89) at org.apache.hadoop.hbase.regionserver.HRegion.internalPrepareFlushCache(HRegion.java:2700) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2578) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2552) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2543) at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1733) at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1554) at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:105) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:102) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:56:47,183 
ERROR [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2023-06-08 16:56:47,183 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for c27fb35ef4358c4e67ecf0e4f4382918: 2023-06-08 16:56:47,183 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for java.lang:type=Memory 2023-06-08 16:56:47,183 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751 2023-06-08 16:56:47,184 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1686243353315.c27fb35ef4358c4e67ecf0e4f4382918. 2023-06-08 16:56:47,184 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44491,DS-fd375f55-ae42-4931-984a-d4ab7aaaefff,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:56:47,184 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing da760d7ea1fe1558eb41ee117a3b04cd, disabling compactions & flushes 2023-06-08 16:56:47,184 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1686243354000.da760d7ea1fe1558eb41ee117a3b04cd. 2023-06-08 16:56:47,184 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(165): Failed to shutdown wal java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:44491,DS-fd375f55-ae42-4931-984a-d4ab7aaaefff,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-06-08 16:56:47,184 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1686243354000.da760d7ea1fe1558eb41ee117a3b04cd. 2023-06-08 16:56:47,184 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1686243354000.da760d7ea1fe1558eb41ee117a3b04cd. after waiting 0 ms 2023-06-08 16:56:47,185 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1686243354000.da760d7ea1fe1558eb41ee117a3b04cd. 
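The lease recovery reported earlier in this run (util.RecoverLeaseFSUtils: "Recovered lease, attempt=1 on file=... after 4003ms") boils down to asking the NameNode to revoke the previous writer's lease and polling until the file is closed, at which point the WAL can be read. The following is only a minimal sketch of that loop, assuming nothing beyond the public DistributedFileSystem#recoverLease API; the class name, pause, and attempt cap are illustrative assumptions, not the actual implementation used by the test utility.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class WalLeaseRecoverySketch {
  // Polls DistributedFileSystem#recoverLease until the NameNode reports the file closed.
  // Hypothetical helper for illustration only; not RecoverLeaseFSUtils itself.
  static void recoverLease(Configuration conf, Path walPath, long pauseMs, int maxAttempts)
      throws IOException, InterruptedException {
    FileSystem fs = walPath.getFileSystem(conf);
    if (!(fs instanceof DistributedFileSystem)) {
      return; // nothing to do: non-HDFS file systems have no lease to recover
    }
    DistributedFileSystem dfs = (DistributedFileSystem) fs;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      // recoverLease returns true once the file is closed and safe to read
      if (dfs.recoverLease(walPath)) {
        System.out.println("Recovered lease, attempt=" + attempt + " on file=" + walPath);
        return;
      }
      // block recovery runs asynchronously on the datanodes; wait before retrying
      Thread.sleep(pauseMs);
    }
    throw new IOException(
        "Could not recover lease on " + walPath + " after " + maxAttempts + " attempts");
  }
}

With a pause on the order of a few seconds, a single retry would line up with the roughly 4-second recovery time logged above; production code would additionally bound total wait time and propagate interrupts rather than sleeping unconditionally.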
2023-06-08 16:56:47,185 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for da760d7ea1fe1558eb41ee117a3b04cd: 2023-06-08 16:56:47,185 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnPipelineRestart,,1686243354000.da760d7ea1fe1558eb41ee117a3b04cd. 2023-06-08 16:56:47,185 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/WALs/jenkins-hbase20.apache.org,44489,1686243352751 2023-06-08 16:56:47,185 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.nio.channels.ClosedChannelException at org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.throwException4Close(DataStreamer.java:324) at org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:151) at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:105) at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58) at java.io.DataOutputStream.write(DataOutputStream.java:107) at java.io.FilterOutputStream.write(FilterOutputStream.java:97) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.writeWALTrailerAndMagic(ProtobufLogWriter.java:140) at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.writeWALTrailer(AbstractProtobufLogWriter.java:234) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.close(ProtobufLogWriter.java:67) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doShutdown(FSHLog.java:492) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:951) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:946) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:56:47,185 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC 2023-06-08 16:56:47,186 DEBUG [regionserver/jenkins-hbase20:0.logRoller] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Failed log close in log roller 2023-06-08 16:56:47,186 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication 2023-06-08 16:56:47,186 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C44489%2C1686243352751.meta:.meta(num 1686243353260) roll requested 2023-06-08 16:56:47,186 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server 2023-06-08 16:56:47,186 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(874): WAL closed. 
Skipping rolling of writer 2023-06-08 16:56:47,186 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 1093140480, "init": 524288000, "max": 2051014656, "used": 369436400 }, "NonHeapMemoryUsage": { "committed": 139026432, "init": 2555904, "max": -1, "used": 136454544 }, "Verbose": false, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] } 2023-06-08 16:56:47,186 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C44489%2C1686243352751:(num 1686243407165) roll requested 2023-06-08 16:56:47,186 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(874): WAL closed. Skipping rolling of writer 2023-06-08 16:56:47,186 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35513] master.MasterRpcServices(609): jenkins-hbase20.apache.org,44489,1686243352751 reported a fatal error: ***** ABORTING region server jenkins-hbase20.apache.org,44489,1686243352751: Unrecoverable exception while closing hbase:meta,,1.1588230740 ***** Cause: java.io.IOException: Cannot append; log is closed, regionName = hbase:meta,,1.1588230740 at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.stampSequenceIdAndPublishToRingBuffer(AbstractFSWAL.java:1166) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.append(FSHLog.java:513) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendMarker(AbstractFSWAL.java:1228) at org.apache.hadoop.hbase.regionserver.wal.WALUtil.doFullMarkerAppendTransaction(WALUtil.java:161) at org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeFlushMarker(WALUtil.java:89) at org.apache.hadoop.hbase.regionserver.HRegion.internalPrepareFlushCache(HRegion.java:2700) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2578) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2552) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2543) at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1733) at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1554) at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:105) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:102) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) 2023-06-08 16:56:47,382 INFO [RS:0;jenkins-hbase20:44489] regionserver.HRegionServer(3303): Received CLOSE for c27fb35ef4358c4e67ecf0e4f4382918 2023-06-08 16:56:47,383 INFO [RS:0;jenkins-hbase20:44489] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-08 16:56:47,383 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing c27fb35ef4358c4e67ecf0e4f4382918, disabling compactions & flushes 2023-06-08 16:56:47,383 INFO [RS:0;jenkins-hbase20:44489] regionserver.HRegionServer(3303): Received CLOSE for da760d7ea1fe1558eb41ee117a3b04cd 2023-06-08 16:56:47,383 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-08 16:56:47,383 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686243353315.c27fb35ef4358c4e67ecf0e4f4382918. 2023-06-08 16:56:47,384 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686243353315.c27fb35ef4358c4e67ecf0e4f4382918. 2023-06-08 16:56:47,384 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686243353315.c27fb35ef4358c4e67ecf0e4f4382918. after waiting 0 ms 2023-06-08 16:56:47,384 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686243353315.c27fb35ef4358c4e67ecf0e4f4382918. 2023-06-08 16:56:47,384 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-08 16:56:47,383 DEBUG [RS:0;jenkins-hbase20:44489] regionserver.HRegionServer(1504): Waiting on 1588230740, c27fb35ef4358c4e67ecf0e4f4382918, da760d7ea1fe1558eb41ee117a3b04cd 2023-06-08 16:56:47,384 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-08 16:56:47,385 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-08 16:56:47,385 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-08 16:56:47,384 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for c27fb35ef4358c4e67ecf0e4f4382918: 2023-06-08 16:56:47,385 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-08 16:56:47,385 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1686243353315.c27fb35ef4358c4e67ecf0e4f4382918. 2023-06-08 16:56:47,386 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-06-08 16:56:47,386 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing da760d7ea1fe1558eb41ee117a3b04cd, disabling compactions & flushes 2023-06-08 16:56:47,386 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1686243354000.da760d7ea1fe1558eb41ee117a3b04cd. 2023-06-08 16:56:47,386 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1686243354000.da760d7ea1fe1558eb41ee117a3b04cd. 2023-06-08 16:56:47,386 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1686243354000.da760d7ea1fe1558eb41ee117a3b04cd. 
after waiting 0 ms 2023-06-08 16:56:47,386 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1686243354000.da760d7ea1fe1558eb41ee117a3b04cd. 2023-06-08 16:56:47,386 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for da760d7ea1fe1558eb41ee117a3b04cd: 2023-06-08 16:56:47,387 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnPipelineRestart,,1686243354000.da760d7ea1fe1558eb41ee117a3b04cd. 2023-06-08 16:56:47,585 INFO [RS:0;jenkins-hbase20:44489] regionserver.HRegionServer(1499): We were exiting though online regions are not empty, because some regions failed closing 2023-06-08 16:56:47,585 INFO [RS:0;jenkins-hbase20:44489] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,44489,1686243352751; all regions closed. 2023-06-08 16:56:47,585 DEBUG [RS:0;jenkins-hbase20:44489] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 16:56:47,585 INFO [RS:0;jenkins-hbase20:44489] regionserver.LeaseManager(133): Closed leases 2023-06-08 16:56:47,586 INFO [RS:0;jenkins-hbase20:44489] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-06-08 16:56:47,587 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-08 16:56:47,589 INFO [RS:0;jenkins-hbase20:44489] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:44489 2023-06-08 16:56:47,595 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): regionserver:44489-0x101cba5b89d0001, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,44489,1686243352751 2023-06-08 16:56:47,595 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 16:56:47,595 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): regionserver:44489-0x101cba5b89d0001, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 16:56:47,596 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,44489,1686243352751] 2023-06-08 16:56:47,596 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,44489,1686243352751; numProcessing=1 2023-06-08 16:56:47,598 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,44489,1686243352751 already deleted, retry=false 2023-06-08 16:56:47,598 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,44489,1686243352751 expired; onlineServers=0 2023-06-08 16:56:47,598 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 
'jenkins-hbase20.apache.org,35513,1686243352711' ***** 2023-06-08 16:56:47,598 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-06-08 16:56:47,598 DEBUG [M:0;jenkins-hbase20:35513] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4095a652, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-08 16:56:47,598 INFO [M:0;jenkins-hbase20:35513] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,35513,1686243352711 2023-06-08 16:56:47,598 INFO [M:0;jenkins-hbase20:35513] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,35513,1686243352711; all regions closed. 2023-06-08 16:56:47,598 DEBUG [M:0;jenkins-hbase20:35513] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 16:56:47,598 DEBUG [M:0;jenkins-hbase20:35513] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-06-08 16:56:47,598 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-06-08 16:56:47,598 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686243352899] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686243352899,5,FailOnTimeoutGroup] 2023-06-08 16:56:47,598 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686243352902] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686243352902,5,FailOnTimeoutGroup] 2023-06-08 16:56:47,598 DEBUG [M:0;jenkins-hbase20:35513] cleaner.HFileCleaner(317): Stopping file delete threads 2023-06-08 16:56:47,601 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-06-08 16:56:47,600 INFO [M:0;jenkins-hbase20:35513] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-06-08 16:56:47,601 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:56:47,601 INFO [M:0;jenkins-hbase20:35513] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-06-08 16:56:47,601 INFO [M:0;jenkins-hbase20:35513] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-06-08 16:56:47,601 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-08 16:56:47,601 DEBUG [M:0;jenkins-hbase20:35513] master.HMaster(1512): Stopping service threads 2023-06-08 16:56:47,601 INFO [M:0;jenkins-hbase20:35513] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-06-08 16:56:47,602 ERROR [M:0;jenkins-hbase20:35513] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-06-08 16:56:47,602 INFO [M:0;jenkins-hbase20:35513] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-06-08 16:56:47,602 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-06-08 16:56:47,602 DEBUG [M:0;jenkins-hbase20:35513] zookeeper.ZKUtil(398): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-06-08 16:56:47,603 WARN [M:0;jenkins-hbase20:35513] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-06-08 16:56:47,603 INFO [M:0;jenkins-hbase20:35513] assignment.AssignmentManager(315): Stopping assignment manager 2023-06-08 16:56:47,603 INFO [M:0;jenkins-hbase20:35513] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-06-08 16:56:47,603 DEBUG [M:0;jenkins-hbase20:35513] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-08 16:56:47,603 INFO [M:0;jenkins-hbase20:35513] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:56:47,604 DEBUG [M:0;jenkins-hbase20:35513] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:56:47,604 DEBUG [M:0;jenkins-hbase20:35513] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-08 16:56:47,604 DEBUG [M:0;jenkins-hbase20:35513] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-08 16:56:47,604 INFO [M:0;jenkins-hbase20:35513] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.20 KB heapSize=45.83 KB 2023-06-08 16:56:47,619 INFO [M:0;jenkins-hbase20:35513] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.20 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/b6611cf9ce564bdd9eef67308ba9c0e8 2023-06-08 16:56:47,624 DEBUG [M:0;jenkins-hbase20:35513] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/b6611cf9ce564bdd9eef67308ba9c0e8 as hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/b6611cf9ce564bdd9eef67308ba9c0e8 2023-06-08 16:56:47,629 INFO [M:0;jenkins-hbase20:35513] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41831/user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/b6611cf9ce564bdd9eef67308ba9c0e8, entries=11, sequenceid=92, filesize=7.0 K 2023-06-08 16:56:47,630 INFO [M:0;jenkins-hbase20:35513] regionserver.HRegion(2948): Finished flush of dataSize ~38.20 KB/39113, heapSize ~45.81 KB/46912, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 26ms, sequenceid=92, compaction requested=false 2023-06-08 16:56:47,632 INFO [M:0;jenkins-hbase20:35513] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:56:47,632 DEBUG [M:0;jenkins-hbase20:35513] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-08 16:56:47,632 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/32b4a022-b195-80cf-64c1-8dec7f0729e5/MasterData/WALs/jenkins-hbase20.apache.org,35513,1686243352711 2023-06-08 16:56:47,635 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-08 16:56:47,635 INFO [M:0;jenkins-hbase20:35513] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-06-08 16:56:47,636 INFO [M:0;jenkins-hbase20:35513] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:35513 2023-06-08 16:56:47,638 DEBUG [M:0;jenkins-hbase20:35513] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,35513,1686243352711 already deleted, retry=false 2023-06-08 16:56:47,697 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): regionserver:44489-0x101cba5b89d0001, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 16:56:47,697 INFO [RS:0;jenkins-hbase20:44489] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,44489,1686243352751; zookeeper connection closed. 
2023-06-08 16:56:47,697 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): regionserver:44489-0x101cba5b89d0001, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 16:56:47,699 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@36672fd7] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@36672fd7 2023-06-08 16:56:47,706 INFO [Listener at localhost.localdomain/44557] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-06-08 16:56:47,798 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 16:56:47,798 DEBUG [Listener at localhost.localdomain/36827-EventThread] zookeeper.ZKWatcher(600): master:35513-0x101cba5b89d0000, quorum=127.0.0.1:59477, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 16:56:47,798 INFO [M:0;jenkins-hbase20:35513] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,35513,1686243352711; zookeeper connection closed. 2023-06-08 16:56:47,801 WARN [Listener at localhost.localdomain/44557] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 16:56:47,809 INFO [Listener at localhost.localdomain/44557] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 16:56:47,918 WARN [BP-1367966598-148.251.75.209-1686243352238 heartbeating to localhost.localdomain/127.0.0.1:41831] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-08 16:56:47,918 WARN [BP-1367966598-148.251.75.209-1686243352238 heartbeating to localhost.localdomain/127.0.0.1:41831] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1367966598-148.251.75.209-1686243352238 (Datanode Uuid c2261922-210e-4eb1-913b-e3161fd72dd2) service to localhost.localdomain/127.0.0.1:41831 2023-06-08 16:56:47,919 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/cluster_a9c3a89b-cd23-0f0f-5e7f-ddf244302ee3/dfs/data/data3/current/BP-1367966598-148.251.75.209-1686243352238] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:56:47,920 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/cluster_a9c3a89b-cd23-0f0f-5e7f-ddf244302ee3/dfs/data/data4/current/BP-1367966598-148.251.75.209-1686243352238] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:56:47,922 WARN [Listener at localhost.localdomain/44557] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 16:56:47,927 INFO [Listener at localhost.localdomain/44557] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 16:56:48,035 WARN [BP-1367966598-148.251.75.209-1686243352238 heartbeating to localhost.localdomain/127.0.0.1:41831] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-08 16:56:48,036 WARN 
[BP-1367966598-148.251.75.209-1686243352238 heartbeating to localhost.localdomain/127.0.0.1:41831] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1367966598-148.251.75.209-1686243352238 (Datanode Uuid 0c68d5bf-79ac-43fc-a534-69fe8a4918f3) service to localhost.localdomain/127.0.0.1:41831 2023-06-08 16:56:48,037 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/cluster_a9c3a89b-cd23-0f0f-5e7f-ddf244302ee3/dfs/data/data1/current/BP-1367966598-148.251.75.209-1686243352238] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:56:48,038 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/cluster_a9c3a89b-cd23-0f0f-5e7f-ddf244302ee3/dfs/data/data2/current/BP-1367966598-148.251.75.209-1686243352238] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:56:48,050 INFO [Listener at localhost.localdomain/44557] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-06-08 16:56:48,164 INFO [Listener at localhost.localdomain/44557] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-06-08 16:56:48,178 INFO [Listener at localhost.localdomain/44557] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-06-08 16:56:48,188 INFO [Listener at localhost.localdomain/44557] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=88 (was 78) Potentially hanging thread: nioEventLoopGroup-27-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-26-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-26-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:41831 java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (305059913) connection to localhost.localdomain/127.0.0.1:41831 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-29-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/44557 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) 
org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-27-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (305059913) connection to localhost.localdomain/127.0.0.1:41831 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-29-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) 
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-28-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-27-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-28-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-29-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-26-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-28-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (305059913) connection to localhost.localdomain/127.0.0.1:41831 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost.localdomain:41831 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=460 (was 459) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=25 (was 53), ProcessCount=189 (was 187) - ProcessCount LEAK? 
-, AvailableMemoryMB=1946 (was 2104) 2023-06-08 16:56:48,196 INFO [Listener at localhost.localdomain/44557] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=88, OpenFileDescriptor=460, MaxFileDescriptor=60000, SystemLoadAverage=25, ProcessCount=189, AvailableMemoryMB=1946 2023-06-08 16:56:48,197 INFO [Listener at localhost.localdomain/44557] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-06-08 16:56:48,197 INFO [Listener at localhost.localdomain/44557] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/hadoop.log.dir so I do NOT create it in target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046 2023-06-08 16:56:48,197 INFO [Listener at localhost.localdomain/44557] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c2c796aa-9f77-a1ec-9774-f97ca98ce7ed/hadoop.tmp.dir so I do NOT create it in target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046 2023-06-08 16:56:48,197 INFO [Listener at localhost.localdomain/44557] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046/cluster_354a9fa2-4437-cd37-4d0d-722cf68a5a06, deleteOnExit=true 2023-06-08 16:56:48,197 INFO [Listener at localhost.localdomain/44557] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-06-08 16:56:48,197 INFO [Listener at localhost.localdomain/44557] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046/test.cache.data in system properties and HBase conf 2023-06-08 16:56:48,197 INFO [Listener at localhost.localdomain/44557] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046/hadoop.tmp.dir in system properties and HBase conf 2023-06-08 16:56:48,197 INFO [Listener at localhost.localdomain/44557] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046/hadoop.log.dir in system properties and HBase conf 2023-06-08 16:56:48,197 INFO [Listener at localhost.localdomain/44557] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046/mapreduce.cluster.local.dir in system properties and HBase conf 2023-06-08 16:56:48,197 INFO [Listener at localhost.localdomain/44557] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-06-08 16:56:48,198 INFO [Listener at 
localhost.localdomain/44557] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-06-08 16:56:48,198 DEBUG [Listener at localhost.localdomain/44557] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-06-08 16:56:48,198 INFO [Listener at localhost.localdomain/44557] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-06-08 16:56:48,198 INFO [Listener at localhost.localdomain/44557] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-06-08 16:56:48,198 INFO [Listener at localhost.localdomain/44557] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-06-08 16:56:48,198 INFO [Listener at localhost.localdomain/44557] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-08 16:56:48,198 INFO [Listener at localhost.localdomain/44557] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-06-08 16:56:48,198 INFO [Listener at localhost.localdomain/44557] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-08 16:56:48,198 INFO [Listener at localhost.localdomain/44557] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-08 16:56:48,199 INFO [Listener at localhost.localdomain/44557] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-08 16:56:48,199 INFO [Listener at localhost.localdomain/44557] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-08 16:56:48,199 INFO [Listener at 
localhost.localdomain/44557] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046/nfs.dump.dir in system properties and HBase conf 2023-06-08 16:56:48,199 INFO [Listener at localhost.localdomain/44557] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046/java.io.tmpdir in system properties and HBase conf 2023-06-08 16:56:48,199 INFO [Listener at localhost.localdomain/44557] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-08 16:56:48,199 INFO [Listener at localhost.localdomain/44557] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-08 16:56:48,199 INFO [Listener at localhost.localdomain/44557] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-08 16:56:48,200 WARN [Listener at localhost.localdomain/44557] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-06-08 16:56:48,201 WARN [Listener at localhost.localdomain/44557] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-08 16:56:48,202 WARN [Listener at localhost.localdomain/44557] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-08 16:56:48,224 WARN [Listener at localhost.localdomain/44557] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 16:56:48,226 INFO [Listener at localhost.localdomain/44557] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 16:56:48,230 INFO [Listener at localhost.localdomain/44557] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046/java.io.tmpdir/Jetty_localhost_localdomain_43159_hdfs____loro28/webapp 2023-06-08 16:56:48,301 INFO [Listener at localhost.localdomain/44557] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:43159 2023-06-08 16:56:48,303 WARN [Listener at localhost.localdomain/44557] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-06-08 16:56:48,304 WARN [Listener at localhost.localdomain/44557] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-08 16:56:48,304 WARN [Listener at localhost.localdomain/44557] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-08 16:56:48,330 WARN [Listener at localhost.localdomain/38111] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 16:56:48,340 WARN [Listener at localhost.localdomain/38111] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 16:56:48,342 WARN [Listener at localhost.localdomain/38111] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 16:56:48,343 INFO [Listener at localhost.localdomain/38111] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 16:56:48,348 INFO [Listener at localhost.localdomain/38111] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046/java.io.tmpdir/Jetty_localhost_46735_datanode____gmneke/webapp 2023-06-08 16:56:48,421 INFO [Listener at localhost.localdomain/38111] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46735 2023-06-08 16:56:48,427 WARN [Listener at localhost.localdomain/38927] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 16:56:48,436 WARN [Listener at localhost.localdomain/38927] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 16:56:48,437 WARN [Listener at localhost.localdomain/38927] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 16:56:48,438 INFO [Listener at localhost.localdomain/38927] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 16:56:48,441 INFO [Listener at localhost.localdomain/38927] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046/java.io.tmpdir/Jetty_localhost_45681_datanode____.m96xzf/webapp 2023-06-08 16:56:48,519 INFO [Listener at localhost.localdomain/38927] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45681 2023-06-08 16:56:48,524 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd179794eca546f17: Processing first storage report for DS-5a04251b-2846-401f-926a-a7778ccbf5e2 from datanode 9e54f55c-5189-4ba8-9577-6564ad93a8b9 2023-06-08 16:56:48,524 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd179794eca546f17: from storage DS-5a04251b-2846-401f-926a-a7778ccbf5e2 node DatanodeRegistration(127.0.0.1:35299, datanodeUuid=9e54f55c-5189-4ba8-9577-6564ad93a8b9, infoPort=35593, infoSecurePort=0, ipcPort=38927, storageInfo=lv=-57;cid=testClusterID;nsid=5500364;c=1686243408203), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 16:56:48,524 INFO [Block report 
processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd179794eca546f17: Processing first storage report for DS-b56f359b-0b23-4ce5-bfc2-30053d7822ac from datanode 9e54f55c-5189-4ba8-9577-6564ad93a8b9 2023-06-08 16:56:48,524 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd179794eca546f17: from storage DS-b56f359b-0b23-4ce5-bfc2-30053d7822ac node DatanodeRegistration(127.0.0.1:35299, datanodeUuid=9e54f55c-5189-4ba8-9577-6564ad93a8b9, infoPort=35593, infoSecurePort=0, ipcPort=38927, storageInfo=lv=-57;cid=testClusterID;nsid=5500364;c=1686243408203), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 16:56:48,526 WARN [Listener at localhost.localdomain/45823] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 16:56:48,623 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xee93d874d98a9b25: Processing first storage report for DS-15eaefe7-33f0-4c2d-8f1d-71840653ce83 from datanode f17876ac-b581-4479-8a5b-557a2aed8db5 2023-06-08 16:56:48,623 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xee93d874d98a9b25: from storage DS-15eaefe7-33f0-4c2d-8f1d-71840653ce83 node DatanodeRegistration(127.0.0.1:44395, datanodeUuid=f17876ac-b581-4479-8a5b-557a2aed8db5, infoPort=46457, infoSecurePort=0, ipcPort=45823, storageInfo=lv=-57;cid=testClusterID;nsid=5500364;c=1686243408203), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-08 16:56:48,623 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xee93d874d98a9b25: Processing first storage report for DS-99a07847-b8c2-4302-aea9-8a0515008966 from datanode f17876ac-b581-4479-8a5b-557a2aed8db5 2023-06-08 16:56:48,623 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xee93d874d98a9b25: from storage DS-99a07847-b8c2-4302-aea9-8a0515008966 node DatanodeRegistration(127.0.0.1:44395, datanodeUuid=f17876ac-b581-4479-8a5b-557a2aed8db5, infoPort=46457, infoSecurePort=0, ipcPort=45823, storageInfo=lv=-57;cid=testClusterID;nsid=5500364;c=1686243408203), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 16:56:48,633 DEBUG [Listener at localhost.localdomain/45823] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046 2023-06-08 16:56:48,635 INFO [Listener at localhost.localdomain/45823] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046/cluster_354a9fa2-4437-cd37-4d0d-722cf68a5a06/zookeeper_0, clientPort=62635, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046/cluster_354a9fa2-4437-cd37-4d0d-722cf68a5a06/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046/cluster_354a9fa2-4437-cd37-4d0d-722cf68a5a06/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-06-08 16:56:48,637 INFO [Listener at 
localhost.localdomain/45823] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=62635 2023-06-08 16:56:48,637 INFO [Listener at localhost.localdomain/45823] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:56:48,638 INFO [Listener at localhost.localdomain/45823] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:56:48,655 INFO [Listener at localhost.localdomain/45823] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2 with version=8 2023-06-08 16:56:48,655 INFO [Listener at localhost.localdomain/45823] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/hbase-staging 2023-06-08 16:56:48,657 INFO [Listener at localhost.localdomain/45823] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-06-08 16:56:48,657 INFO [Listener at localhost.localdomain/45823] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 16:56:48,657 INFO [Listener at localhost.localdomain/45823] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-08 16:56:48,657 INFO [Listener at localhost.localdomain/45823] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-08 16:56:48,657 INFO [Listener at localhost.localdomain/45823] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 16:56:48,657 INFO [Listener at localhost.localdomain/45823] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-08 16:56:48,658 INFO [Listener at localhost.localdomain/45823] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-06-08 16:56:48,659 INFO [Listener at localhost.localdomain/45823] ipc.NettyRpcServer(120): Bind to /148.251.75.209:44053 2023-06-08 16:56:48,660 INFO [Listener at localhost.localdomain/45823] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:56:48,660 INFO [Listener at localhost.localdomain/45823] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:56:48,661 INFO [Listener at localhost.localdomain/45823] zookeeper.RecoverableZooKeeper(93): Process identifier=master:44053 connecting to ZooKeeper ensemble=127.0.0.1:62635 2023-06-08 
16:56:48,666 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:440530x0, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-08 16:56:48,666 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:44053-0x101cba693250000 connected 2023-06-08 16:56:48,677 DEBUG [Listener at localhost.localdomain/45823] zookeeper.ZKUtil(164): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-08 16:56:48,678 DEBUG [Listener at localhost.localdomain/45823] zookeeper.ZKUtil(164): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 16:56:48,678 DEBUG [Listener at localhost.localdomain/45823] zookeeper.ZKUtil(164): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-08 16:56:48,680 DEBUG [Listener at localhost.localdomain/45823] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44053 2023-06-08 16:56:48,680 DEBUG [Listener at localhost.localdomain/45823] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44053 2023-06-08 16:56:48,680 DEBUG [Listener at localhost.localdomain/45823] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44053 2023-06-08 16:56:48,680 DEBUG [Listener at localhost.localdomain/45823] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44053 2023-06-08 16:56:48,681 DEBUG [Listener at localhost.localdomain/45823] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44053 2023-06-08 16:56:48,681 INFO [Listener at localhost.localdomain/45823] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2, hbase.cluster.distributed=false 2023-06-08 16:56:48,691 INFO [Listener at localhost.localdomain/45823] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-06-08 16:56:48,691 INFO [Listener at localhost.localdomain/45823] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 16:56:48,691 INFO [Listener at localhost.localdomain/45823] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-08 16:56:48,691 INFO [Listener at localhost.localdomain/45823] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-08 16:56:48,691 INFO [Listener at localhost.localdomain/45823] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 16:56:48,691 INFO [Listener at localhost.localdomain/45823] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-08 16:56:48,691 
INFO [Listener at localhost.localdomain/45823] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-08 16:56:48,692 INFO [Listener at localhost.localdomain/45823] ipc.NettyRpcServer(120): Bind to /148.251.75.209:39379 2023-06-08 16:56:48,693 INFO [Listener at localhost.localdomain/45823] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-08 16:56:48,694 DEBUG [Listener at localhost.localdomain/45823] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-08 16:56:48,694 INFO [Listener at localhost.localdomain/45823] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:56:48,695 INFO [Listener at localhost.localdomain/45823] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:56:48,696 INFO [Listener at localhost.localdomain/45823] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39379 connecting to ZooKeeper ensemble=127.0.0.1:62635 2023-06-08 16:56:48,707 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:393790x0, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-08 16:56:48,708 DEBUG [Listener at localhost.localdomain/45823] zookeeper.ZKUtil(164): regionserver:393790x0, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-08 16:56:48,708 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39379-0x101cba693250001 connected 2023-06-08 16:56:48,709 DEBUG [Listener at localhost.localdomain/45823] zookeeper.ZKUtil(164): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 16:56:48,709 DEBUG [Listener at localhost.localdomain/45823] zookeeper.ZKUtil(164): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-08 16:56:48,709 DEBUG [Listener at localhost.localdomain/45823] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39379 2023-06-08 16:56:48,710 DEBUG [Listener at localhost.localdomain/45823] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39379 2023-06-08 16:56:48,710 DEBUG [Listener at localhost.localdomain/45823] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39379 2023-06-08 16:56:48,710 DEBUG [Listener at localhost.localdomain/45823] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39379 2023-06-08 16:56:48,710 DEBUG [Listener at localhost.localdomain/45823] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39379 2023-06-08 16:56:48,711 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,44053,1686243408656 2023-06-08 16:56:48,724 DEBUG [Listener at 
localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-08 16:56:48,725 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,44053,1686243408656 2023-06-08 16:56:48,726 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-08 16:56:48,726 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-08 16:56:48,726 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:56:48,727 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-08 16:56:48,728 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-08 16:56:48,729 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,44053,1686243408656 from backup master directory 2023-06-08 16:56:48,730 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,44053,1686243408656 2023-06-08 16:56:48,730 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-08 16:56:48,730 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
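
Both the master (master:44053) and the region server (regionserver:39379) in the records above connect to the ZooKeeper ensemble at 127.0.0.1:62635 that the MiniZooKeeperCluster reported earlier in this log. As a rough, hypothetical sketch of how a client could be pointed at that same ensemble (the HConstants keys are real HBase configuration names; the port value simply mirrors what the log reports, and none of this is taken from the test source itself):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class MiniClusterClientSketch {
        public static void main(String[] args) throws Exception {
            // Point a client at the test ensemble logged above (127.0.0.1:62635).
            Configuration conf = HBaseConfiguration.create();
            conf.set(HConstants.ZOOKEEPER_QUORUM, "127.0.0.1");
            conf.setInt(HConstants.ZOOKEEPER_CLIENT_PORT, 62635);
            try (Connection connection = ConnectionFactory.createConnection(conf)) {
                // connection.getAdmin() / connection.getTable(...) would now resolve the
                // active master through the /hbase znodes shown being created above.
            }
        }
    }
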
2023-06-08 16:56:48,730 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,44053,1686243408656 2023-06-08 16:56:48,746 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/hbase.id with ID: 34253fd5-b3ed-458d-9afe-a2409d751dca 2023-06-08 16:56:48,756 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:56:48,758 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:56:48,767 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x002a30d0 to 127.0.0.1:62635 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-08 16:56:48,775 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@73d965d1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-08 16:56:48,775 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-08 16:56:48,776 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-06-08 16:56:48,778 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 16:56:48,780 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/MasterData/data/master/store-tmp 2023-06-08 16:56:48,792 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:56:48,792 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-08 16:56:48,792 INFO 
[master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:56:48,792 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:56:48,792 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-08 16:56:48,792 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:56:48,792 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:56:48,792 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-08 16:56:48,793 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/MasterData/WALs/jenkins-hbase20.apache.org,44053,1686243408656 2023-06-08 16:56:48,797 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C44053%2C1686243408656, suffix=, logDir=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/MasterData/WALs/jenkins-hbase20.apache.org,44053,1686243408656, archiveDir=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/MasterData/oldWALs, maxLogs=10 2023-06-08 16:56:48,806 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/MasterData/WALs/jenkins-hbase20.apache.org,44053,1686243408656/jenkins-hbase20.apache.org%2C44053%2C1686243408656.1686243408798 2023-06-08 16:56:48,806 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35299,DS-5a04251b-2846-401f-926a-a7778ccbf5e2,DISK], DatanodeInfoWithStorage[127.0.0.1:44395,DS-15eaefe7-33f0-4c2d-8f1d-71840653ce83,DISK]] 2023-06-08 16:56:48,806 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-08 16:56:48,806 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:56:48,806 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:56:48,806 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:56:48,808 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, 
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:56:48,810 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-08 16:56:48,811 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-08 16:56:48,811 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:56:48,812 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:56:48,812 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:56:48,816 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:56:48,819 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 16:56:48,820 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=692854, jitterRate=-0.11899086833000183}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-08 16:56:48,820 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-08 16:56:48,820 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-08 16:56:48,822 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-08 16:56:48,822 INFO 
[master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-06-08 16:56:48,822 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-06-08 16:56:48,822 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-06-08 16:56:48,823 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-06-08 16:56:48,823 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-08 16:56:48,826 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-08 16:56:48,827 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-06-08 16:56:48,835 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-08 16:56:48,835 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-06-08 16:56:48,836 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-08 16:56:48,836 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-08 16:56:48,837 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-08 16:56:48,838 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:56:48,839 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-08 16:56:48,839 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-08 16:56:48,840 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-08 16:56:48,840 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-08 16:56:48,840 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-08 16:56:48,840 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:56:48,841 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,44053,1686243408656, sessionid=0x101cba693250000, setting cluster-up flag (Was=false) 2023-06-08 16:56:48,843 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:56:48,845 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-08 16:56:48,846 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,44053,1686243408656 2023-06-08 16:56:48,848 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:56:48,850 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-08 16:56:48,851 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,44053,1686243408656 2023-06-08 16:56:48,852 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/.hbase-snapshot/.tmp 2023-06-08 16:56:48,854 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-08 16:56:48,854 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-08 16:56:48,854 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-08 16:56:48,854 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-08 16:56:48,854 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-08 16:56:48,854 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-06-08 16:56:48,854 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:56:48,854 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-08 16:56:48,854 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:56:48,858 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1686243438858 2023-06-08 16:56:48,859 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-08 16:56:48,859 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-08 16:56:48,859 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-08 16:56:48,859 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): 
Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-08 16:56:48,859 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-08 16:56:48,859 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-08 16:56:48,862 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-08 16:56:48,862 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-08 16:56:48,862 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-08 16:56:48,863 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-08 16:56:48,863 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-08 16:56:48,863 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-08 16:56:48,863 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-08 16:56:48,863 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-08 16:56:48,864 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-08 16:56:48,864 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686243408863,5,FailOnTimeoutGroup] 2023-06-08 16:56:48,864 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686243408864,5,FailOnTimeoutGroup] 2023-06-08 16:56:48,864 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-06-08 16:56:48,864 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-08 16:56:48,864 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-08 16:56:48,865 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-06-08 16:56:48,880 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-08 16:56:48,881 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-08 16:56:48,881 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2 2023-06-08 16:56:48,889 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:56:48,890 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-08 16:56:48,891 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/meta/1588230740/info 2023-06-08 16:56:48,892 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 
0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-08 16:56:48,892 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:56:48,892 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-08 16:56:48,893 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/meta/1588230740/rep_barrier 2023-06-08 16:56:48,894 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-08 16:56:48,894 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:56:48,894 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-08 16:56:48,896 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/meta/1588230740/table 2023-06-08 16:56:48,896 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-08 16:56:48,896 INFO 
[StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:56:48,897 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/meta/1588230740 2023-06-08 16:56:48,898 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/meta/1588230740 2023-06-08 16:56:48,901 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-08 16:56:48,902 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-08 16:56:48,904 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 16:56:48,905 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=876583, jitterRate=0.11463403701782227}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-08 16:56:48,905 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-08 16:56:48,905 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-08 16:56:48,905 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-08 16:56:48,905 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-08 16:56:48,905 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-08 16:56:48,905 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-08 16:56:48,905 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-08 16:56:48,905 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-08 16:56:48,906 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-08 16:56:48,906 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-08 16:56:48,906 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-08 16:56:48,908 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-08 16:56:48,909 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure 
table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-08 16:56:48,912 INFO [RS:0;jenkins-hbase20:39379] regionserver.HRegionServer(951): ClusterId : 34253fd5-b3ed-458d-9afe-a2409d751dca 2023-06-08 16:56:48,913 DEBUG [RS:0;jenkins-hbase20:39379] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-08 16:56:48,914 DEBUG [RS:0;jenkins-hbase20:39379] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-08 16:56:48,914 DEBUG [RS:0;jenkins-hbase20:39379] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-08 16:56:48,916 DEBUG [RS:0;jenkins-hbase20:39379] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-08 16:56:48,917 DEBUG [RS:0;jenkins-hbase20:39379] zookeeper.ReadOnlyZKClient(139): Connect 0x3cf1956f to 127.0.0.1:62635 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-08 16:56:48,920 DEBUG [RS:0;jenkins-hbase20:39379] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@32367c81, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-08 16:56:48,920 DEBUG [RS:0;jenkins-hbase20:39379] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3609fe1f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-08 16:56:48,929 DEBUG [RS:0;jenkins-hbase20:39379] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:39379 2023-06-08 16:56:48,929 INFO [RS:0;jenkins-hbase20:39379] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-08 16:56:48,929 INFO [RS:0;jenkins-hbase20:39379] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-08 16:56:48,929 DEBUG [RS:0;jenkins-hbase20:39379] regionserver.HRegionServer(1022): About to register with Master. 
2023-06-08 16:56:48,930 INFO [RS:0;jenkins-hbase20:39379] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,44053,1686243408656 with isa=jenkins-hbase20.apache.org/148.251.75.209:39379, startcode=1686243408690 2023-06-08 16:56:48,930 DEBUG [RS:0;jenkins-hbase20:39379] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-08 16:56:48,934 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:34423, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-06-08 16:56:48,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:56:48,936 DEBUG [RS:0;jenkins-hbase20:39379] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2 2023-06-08 16:56:48,936 DEBUG [RS:0;jenkins-hbase20:39379] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:38111 2023-06-08 16:56:48,936 DEBUG [RS:0;jenkins-hbase20:39379] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-08 16:56:48,937 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 16:56:48,938 DEBUG [RS:0;jenkins-hbase20:39379] zookeeper.ZKUtil(162): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:56:48,938 WARN [RS:0;jenkins-hbase20:39379] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-06-08 16:56:48,938 INFO [RS:0;jenkins-hbase20:39379] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 16:56:48,938 DEBUG [RS:0;jenkins-hbase20:39379] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/WALs/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:56:48,938 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,39379,1686243408690] 2023-06-08 16:56:48,942 DEBUG [RS:0;jenkins-hbase20:39379] zookeeper.ZKUtil(162): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:56:48,943 DEBUG [RS:0;jenkins-hbase20:39379] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-08 16:56:48,943 INFO [RS:0;jenkins-hbase20:39379] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-08 16:56:48,945 INFO [RS:0;jenkins-hbase20:39379] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-08 16:56:48,945 INFO [RS:0;jenkins-hbase20:39379] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-08 16:56:48,946 INFO [RS:0;jenkins-hbase20:39379] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-08 16:56:48,946 INFO [RS:0;jenkins-hbase20:39379] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-08 16:56:48,947 INFO [RS:0;jenkins-hbase20:39379] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-06-08 16:56:48,948 DEBUG [RS:0;jenkins-hbase20:39379] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:56:48,948 DEBUG [RS:0;jenkins-hbase20:39379] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:56:48,948 DEBUG [RS:0;jenkins-hbase20:39379] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:56:48,948 DEBUG [RS:0;jenkins-hbase20:39379] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:56:48,948 DEBUG [RS:0;jenkins-hbase20:39379] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:56:48,948 DEBUG [RS:0;jenkins-hbase20:39379] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-08 16:56:48,948 DEBUG [RS:0;jenkins-hbase20:39379] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:56:48,948 DEBUG [RS:0;jenkins-hbase20:39379] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:56:48,948 DEBUG [RS:0;jenkins-hbase20:39379] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:56:48,948 DEBUG [RS:0;jenkins-hbase20:39379] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:56:48,949 INFO [RS:0;jenkins-hbase20:39379] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-08 16:56:48,950 INFO [RS:0;jenkins-hbase20:39379] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-08 16:56:48,950 INFO [RS:0;jenkins-hbase20:39379] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-08 16:56:48,960 INFO [RS:0;jenkins-hbase20:39379] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-08 16:56:48,960 INFO [RS:0;jenkins-hbase20:39379] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39379,1686243408690-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-06-08 16:56:48,969 INFO [RS:0;jenkins-hbase20:39379] regionserver.Replication(203): jenkins-hbase20.apache.org,39379,1686243408690 started 2023-06-08 16:56:48,969 INFO [RS:0;jenkins-hbase20:39379] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,39379,1686243408690, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:39379, sessionid=0x101cba693250001 2023-06-08 16:56:48,970 DEBUG [RS:0;jenkins-hbase20:39379] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-08 16:56:48,970 DEBUG [RS:0;jenkins-hbase20:39379] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:56:48,970 DEBUG [RS:0;jenkins-hbase20:39379] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,39379,1686243408690' 2023-06-08 16:56:48,970 DEBUG [RS:0;jenkins-hbase20:39379] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-08 16:56:48,970 DEBUG [RS:0;jenkins-hbase20:39379] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-08 16:56:48,971 DEBUG [RS:0;jenkins-hbase20:39379] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-08 16:56:48,971 DEBUG [RS:0;jenkins-hbase20:39379] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-08 16:56:48,971 DEBUG [RS:0;jenkins-hbase20:39379] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:56:48,971 DEBUG [RS:0;jenkins-hbase20:39379] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,39379,1686243408690' 2023-06-08 16:56:48,971 DEBUG [RS:0;jenkins-hbase20:39379] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-06-08 16:56:48,971 DEBUG [RS:0;jenkins-hbase20:39379] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-08 16:56:48,971 DEBUG [RS:0;jenkins-hbase20:39379] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-08 16:56:48,971 INFO [RS:0;jenkins-hbase20:39379] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-08 16:56:48,971 INFO [RS:0;jenkins-hbase20:39379] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-06-08 16:56:49,019 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-08 16:56:49,059 DEBUG [jenkins-hbase20:44053] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-08 16:56:49,061 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,39379,1686243408690, state=OPENING 2023-06-08 16:56:49,063 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-08 16:56:49,064 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:56:49,066 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,39379,1686243408690}] 2023-06-08 16:56:49,066 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-08 16:56:49,076 INFO [RS:0;jenkins-hbase20:39379] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C39379%2C1686243408690, suffix=, logDir=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/WALs/jenkins-hbase20.apache.org,39379,1686243408690, archiveDir=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/oldWALs, maxLogs=32 2023-06-08 16:56:49,087 INFO [RS:0;jenkins-hbase20:39379] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/WALs/jenkins-hbase20.apache.org,39379,1686243408690/jenkins-hbase20.apache.org%2C39379%2C1686243408690.1686243409077 2023-06-08 16:56:49,087 DEBUG [RS:0;jenkins-hbase20:39379] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44395,DS-15eaefe7-33f0-4c2d-8f1d-71840653ce83,DISK], DatanodeInfoWithStorage[127.0.0.1:35299,DS-5a04251b-2846-401f-926a-a7778ccbf5e2,DISK]] 2023-06-08 16:56:49,226 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:56:49,226 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-08 16:56:49,230 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:41274, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-08 16:56:49,237 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-08 16:56:49,238 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 16:56:49,241 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C39379%2C1686243408690.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/WALs/jenkins-hbase20.apache.org,39379,1686243408690, 
archiveDir=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/oldWALs, maxLogs=32 2023-06-08 16:56:49,254 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/WALs/jenkins-hbase20.apache.org,39379,1686243408690/jenkins-hbase20.apache.org%2C39379%2C1686243408690.meta.1686243409241.meta 2023-06-08 16:56:49,254 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44395,DS-15eaefe7-33f0-4c2d-8f1d-71840653ce83,DISK], DatanodeInfoWithStorage[127.0.0.1:35299,DS-5a04251b-2846-401f-926a-a7778ccbf5e2,DISK]] 2023-06-08 16:56:49,254 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-08 16:56:49,255 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-08 16:56:49,255 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-08 16:56:49,255 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-06-08 16:56:49,255 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-08 16:56:49,255 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:56:49,255 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-08 16:56:49,255 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-08 16:56:49,257 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-08 16:56:49,259 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/meta/1588230740/info 2023-06-08 16:56:49,259 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/meta/1588230740/info 2023-06-08 16:56:49,259 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-08 16:56:49,260 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:56:49,260 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-08 16:56:49,261 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/meta/1588230740/rep_barrier 2023-06-08 16:56:49,261 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/meta/1588230740/rep_barrier 2023-06-08 16:56:49,262 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-08 16:56:49,263 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:56:49,263 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-08 16:56:49,264 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/meta/1588230740/table 2023-06-08 16:56:49,264 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/meta/1588230740/table 2023-06-08 16:56:49,265 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 
0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-08 16:56:49,265 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:56:49,267 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/meta/1588230740 2023-06-08 16:56:49,268 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/meta/1588230740 2023-06-08 16:56:49,272 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-08 16:56:49,274 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-08 16:56:49,275 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=753273, jitterRate=-0.04216498136520386}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-08 16:56:49,275 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-08 16:56:49,277 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1686243409226 2023-06-08 16:56:49,281 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-08 16:56:49,282 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-08 16:56:49,282 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,39379,1686243408690, state=OPEN 2023-06-08 16:56:49,284 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-08 16:56:49,284 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-08 16:56:49,286 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-08 16:56:49,286 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, 
server=jenkins-hbase20.apache.org,39379,1686243408690 in 218 msec 2023-06-08 16:56:49,289 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-08 16:56:49,289 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 380 msec 2023-06-08 16:56:49,292 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 437 msec 2023-06-08 16:56:49,292 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1686243409292, completionTime=-1 2023-06-08 16:56:49,292 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-08 16:56:49,292 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-06-08 16:56:49,295 DEBUG [hconnection-0xf195de4-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-08 16:56:49,297 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:41284, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-08 16:56:49,298 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-08 16:56:49,298 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1686243469298 2023-06-08 16:56:49,298 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1686243529298 2023-06-08 16:56:49,298 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-06-08 16:56:49,303 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44053,1686243408656-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-08 16:56:49,303 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44053,1686243408656-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-08 16:56:49,303 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44053,1686243408656-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-08 16:56:49,304 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:44053, period=300000, unit=MILLISECONDS is enabled. 2023-06-08 16:56:49,304 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-08 16:56:49,304 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. 
Creating... 2023-06-08 16:56:49,304 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-08 16:56:49,305 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-08 16:56:49,305 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-08 16:56:49,307 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-08 16:56:49,309 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-08 16:56:49,312 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/.tmp/data/hbase/namespace/1a56c5fd3cef3316bec993678c8d0415 2023-06-08 16:56:49,313 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/.tmp/data/hbase/namespace/1a56c5fd3cef3316bec993678c8d0415 empty. 2023-06-08 16:56:49,313 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/.tmp/data/hbase/namespace/1a56c5fd3cef3316bec993678c8d0415 2023-06-08 16:56:49,314 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-08 16:56:49,327 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-08 16:56:49,328 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1a56c5fd3cef3316bec993678c8d0415, NAME => 'hbase:namespace,,1686243409304.1a56c5fd3cef3316bec993678c8d0415.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/.tmp 2023-06-08 16:56:49,340 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686243409304.1a56c5fd3cef3316bec993678c8d0415.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:56:49,340 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 1a56c5fd3cef3316bec993678c8d0415, disabling compactions & flushes 2023-06-08 16:56:49,340 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region 
hbase:namespace,,1686243409304.1a56c5fd3cef3316bec993678c8d0415. 2023-06-08 16:56:49,340 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686243409304.1a56c5fd3cef3316bec993678c8d0415. 2023-06-08 16:56:49,340 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686243409304.1a56c5fd3cef3316bec993678c8d0415. after waiting 0 ms 2023-06-08 16:56:49,340 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686243409304.1a56c5fd3cef3316bec993678c8d0415. 2023-06-08 16:56:49,340 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686243409304.1a56c5fd3cef3316bec993678c8d0415. 2023-06-08 16:56:49,340 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 1a56c5fd3cef3316bec993678c8d0415: 2023-06-08 16:56:49,343 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-08 16:56:49,344 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1686243409304.1a56c5fd3cef3316bec993678c8d0415.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686243409343"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686243409343"}]},"ts":"1686243409343"} 2023-06-08 16:56:49,346 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-08 16:56:49,347 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-08 16:56:49,348 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686243409347"}]},"ts":"1686243409347"} 2023-06-08 16:56:49,350 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-08 16:56:49,353 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=1a56c5fd3cef3316bec993678c8d0415, ASSIGN}] 2023-06-08 16:56:49,356 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=1a56c5fd3cef3316bec993678c8d0415, ASSIGN 2023-06-08 16:56:49,357 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=1a56c5fd3cef3316bec993678c8d0415, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,39379,1686243408690; forceNewPlan=false, retain=false 2023-06-08 16:56:49,508 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=1a56c5fd3cef3316bec993678c8d0415, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:56:49,508 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1686243409304.1a56c5fd3cef3316bec993678c8d0415.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686243409508"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686243409508"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686243409508"}]},"ts":"1686243409508"} 2023-06-08 16:56:49,510 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 1a56c5fd3cef3316bec993678c8d0415, server=jenkins-hbase20.apache.org,39379,1686243408690}] 2023-06-08 16:56:49,669 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1686243409304.1a56c5fd3cef3316bec993678c8d0415. 2023-06-08 16:56:49,669 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1a56c5fd3cef3316bec993678c8d0415, NAME => 'hbase:namespace,,1686243409304.1a56c5fd3cef3316bec993678c8d0415.', STARTKEY => '', ENDKEY => ''} 2023-06-08 16:56:49,669 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 1a56c5fd3cef3316bec993678c8d0415 2023-06-08 16:56:49,669 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686243409304.1a56c5fd3cef3316bec993678c8d0415.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:56:49,669 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1a56c5fd3cef3316bec993678c8d0415 2023-06-08 16:56:49,670 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1a56c5fd3cef3316bec993678c8d0415 2023-06-08 16:56:49,672 INFO [StoreOpener-1a56c5fd3cef3316bec993678c8d0415-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1a56c5fd3cef3316bec993678c8d0415 2023-06-08 16:56:49,674 DEBUG [StoreOpener-1a56c5fd3cef3316bec993678c8d0415-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/namespace/1a56c5fd3cef3316bec993678c8d0415/info 2023-06-08 16:56:49,675 DEBUG [StoreOpener-1a56c5fd3cef3316bec993678c8d0415-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/namespace/1a56c5fd3cef3316bec993678c8d0415/info 2023-06-08 16:56:49,675 INFO [StoreOpener-1a56c5fd3cef3316bec993678c8d0415-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1a56c5fd3cef3316bec993678c8d0415 columnFamilyName info 2023-06-08 16:56:49,676 INFO [StoreOpener-1a56c5fd3cef3316bec993678c8d0415-1] regionserver.HStore(310): Store=1a56c5fd3cef3316bec993678c8d0415/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:56:49,678 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/namespace/1a56c5fd3cef3316bec993678c8d0415 2023-06-08 16:56:49,679 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/namespace/1a56c5fd3cef3316bec993678c8d0415 2023-06-08 16:56:49,684 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1a56c5fd3cef3316bec993678c8d0415 2023-06-08 16:56:49,688 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/namespace/1a56c5fd3cef3316bec993678c8d0415/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 16:56:49,689 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1a56c5fd3cef3316bec993678c8d0415; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=823677, jitterRate=0.04735986888408661}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-08 16:56:49,689 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1a56c5fd3cef3316bec993678c8d0415: 2023-06-08 16:56:49,692 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1686243409304.1a56c5fd3cef3316bec993678c8d0415., pid=6, masterSystemTime=1686243409663 2023-06-08 16:56:49,695 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1686243409304.1a56c5fd3cef3316bec993678c8d0415. 2023-06-08 16:56:49,695 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1686243409304.1a56c5fd3cef3316bec993678c8d0415. 
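Editor's note: the wal.AbstractFSWAL entries above report "blocksize=256 MB, rollsize=128 MB, ..., maxLogs=32" for both the region server WAL and the meta WAL. The sketch below is illustrative only and is not part of this test run; it shows the standard configuration keys that drive those reported values (hbase.regionserver.hlog.blocksize, hbase.regionserver.logroll.multiplier, hbase.regionserver.maxlogs), with the explicit numbers chosen here as assumptions to match the log.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Illustrative sketch only (not taken from the log above): the keys that determine the
// WAL "blocksize / rollsize / maxLogs" values reported by AbstractFSWAL.
public class WalRollConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024); // hypothetical explicit block size
    conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);          // roll when WAL reaches blocksize * multiplier
    conf.setInt("hbase.regionserver.maxlogs", 32);                         // cap on un-archived WAL files

    long blocksize = conf.getLong("hbase.regionserver.hlog.blocksize", 0L);
    float multiplier = conf.getFloat("hbase.regionserver.logroll.multiplier", 0.5f);
    // 256 MB * 0.5 = 128 MB, matching the "rollsize=128 MB" entries above.
    System.out.println("rollsize=" + (long) (blocksize * multiplier) + " bytes");
  }
}
```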
2023-06-08 16:56:49,697 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=1a56c5fd3cef3316bec993678c8d0415, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:56:49,697 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1686243409304.1a56c5fd3cef3316bec993678c8d0415.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686243409697"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686243409697"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686243409697"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686243409697"}]},"ts":"1686243409697"} 2023-06-08 16:56:49,702 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-08 16:56:49,702 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 1a56c5fd3cef3316bec993678c8d0415, server=jenkins-hbase20.apache.org,39379,1686243408690 in 189 msec 2023-06-08 16:56:49,705 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-08 16:56:49,705 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=1a56c5fd3cef3316bec993678c8d0415, ASSIGN in 349 msec 2023-06-08 16:56:49,706 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-08 16:56:49,706 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686243409706"}]},"ts":"1686243409706"} 2023-06-08 16:56:49,708 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-08 16:56:49,709 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-08 16:56:49,710 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-08 16:56:49,710 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-08 16:56:49,710 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:56:49,712 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 406 msec 2023-06-08 16:56:49,714 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-08 16:56:49,731 DEBUG [Listener at localhost.localdomain/45823-EventThread] 
zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-08 16:56:49,735 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 20 msec 2023-06-08 16:56:49,746 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-08 16:56:49,757 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-08 16:56:49,761 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 14 msec 2023-06-08 16:56:49,774 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-08 16:56:49,776 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-08 16:56:49,776 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.046sec 2023-06-08 16:56:49,776 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-08 16:56:49,776 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-06-08 16:56:49,776 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-08 16:56:49,776 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44053,1686243408656-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-08 16:56:49,776 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44053,1686243408656-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-06-08 16:56:49,778 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-08 16:56:49,813 DEBUG [Listener at localhost.localdomain/45823] zookeeper.ReadOnlyZKClient(139): Connect 0x5aae965e to 127.0.0.1:62635 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-08 16:56:49,816 DEBUG [Listener at localhost.localdomain/45823] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@850dd75, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-08 16:56:49,817 DEBUG [hconnection-0x2138f06c-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-08 16:56:49,819 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:41286, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-08 16:56:49,821 INFO [Listener at localhost.localdomain/45823] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,44053,1686243408656 2023-06-08 16:56:49,821 INFO [Listener at localhost.localdomain/45823] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:56:49,824 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-08 16:56:49,824 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:56:49,825 INFO [Listener at localhost.localdomain/45823] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-08 16:56:49,827 DEBUG [Listener at localhost.localdomain/45823] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-06-08 16:56:49,829 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:45014, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-06-08 16:56:49,831 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-06-08 16:56:49,831 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-06-08 16:56:49,831 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-08 16:56:49,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:56:49,834 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-06-08 16:56:49,834 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] master.MasterRpcServices(697): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testCompactionRecordDoesntBlockRolling" procId is: 9 2023-06-08 16:56:49,835 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-08 16:56:49,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-08 16:56:49,837 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92 2023-06-08 16:56:49,837 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92 empty. 
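Editor's note: the create request above ('TestLogRolling-testCompactionRecordDoesntBlockRolling' with a single 'info' family, VERSIONS => '1', BLOCKSIZE => '65536') and the TableDescriptorChecker warnings about MAX_FILESIZE (786432) and MEMSTORE_FLUSHSIZE (8192) correspond to a descriptor built roughly as sketched below. This is a reconstruction using the public HBase 2.x client API, not the test's actual code; the small sizes could equally come from hbase.hregion.max.filesize / hbase.hregion.memstore.flush.size in the site configuration, as the warning text allows, and the connection handling here is assumed.

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

// Illustrative reconstruction only: a descriptor matching the create request and the
// "too small" MAX_FILESIZE / MEMSTORE_FLUSHSIZE warnings logged above.
public class CreateTestTableSketch {
  public static void main(String[] args) throws Exception {
    TableDescriptor td = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("TestLogRolling-testCompactionRecordDoesntBlockRolling"))
        .setMaxFileSize(786432L)        // deliberately tiny -> MAX_FILESIZE warning
        .setMemStoreFlushSize(8192L)    // deliberately tiny -> MEMSTORE_FLUSHSIZE warning
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
            .setMaxVersions(1)          // VERSIONS => '1'
            .setBlocksize(65536)        // BLOCKSIZE => '65536'
            .build())
        .build();

    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      admin.createTable(td);            // returns once the CreateTableProcedure completes
    }
  }
}
```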
2023-06-08 16:56:49,838 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92 2023-06-08 16:56:49,838 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testCompactionRecordDoesntBlockRolling regions 2023-06-08 16:56:49,851 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/.tabledesc/.tableinfo.0000000001 2023-06-08 16:56:49,853 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => fd35c481d37d36fe421704684cd81d92, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/.tmp 2023-06-08 16:56:49,860 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:56:49,860 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1604): Closing fd35c481d37d36fe421704684cd81d92, disabling compactions & flushes 2023-06-08 16:56:49,860 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92. 2023-06-08 16:56:49,860 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92. 2023-06-08 16:56:49,860 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92. after waiting 0 ms 2023-06-08 16:56:49,860 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92. 2023-06-08 16:56:49,860 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92. 
2023-06-08 16:56:49,860 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1558): Region close journal for fd35c481d37d36fe421704684cd81d92: 2023-06-08 16:56:49,863 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ADD_TO_META 2023-06-08 16:56:49,864 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1686243409864"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686243409864"}]},"ts":"1686243409864"} 2023-06-08 16:56:49,865 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-08 16:56:49,866 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-08 16:56:49,866 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686243409866"}]},"ts":"1686243409866"} 2023-06-08 16:56:49,868 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLING in hbase:meta 2023-06-08 16:56:49,871 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=fd35c481d37d36fe421704684cd81d92, ASSIGN}] 2023-06-08 16:56:49,873 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=fd35c481d37d36fe421704684cd81d92, ASSIGN 2023-06-08 16:56:49,874 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=fd35c481d37d36fe421704684cd81d92, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,39379,1686243408690; forceNewPlan=false, retain=false 2023-06-08 16:56:50,025 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=fd35c481d37d36fe421704684cd81d92, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:56:50,025 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1686243410025"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686243410025"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686243410025"}]},"ts":"1686243410025"} 2023-06-08 16:56:50,027 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; 
OpenRegionProcedure fd35c481d37d36fe421704684cd81d92, server=jenkins-hbase20.apache.org,39379,1686243408690}] 2023-06-08 16:56:50,183 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92. 2023-06-08 16:56:50,183 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fd35c481d37d36fe421704684cd81d92, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92.', STARTKEY => '', ENDKEY => ''} 2023-06-08 16:56:50,184 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testCompactionRecordDoesntBlockRolling fd35c481d37d36fe421704684cd81d92 2023-06-08 16:56:50,184 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:56:50,184 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for fd35c481d37d36fe421704684cd81d92 2023-06-08 16:56:50,184 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for fd35c481d37d36fe421704684cd81d92 2023-06-08 16:56:50,185 INFO [StoreOpener-fd35c481d37d36fe421704684cd81d92-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region fd35c481d37d36fe421704684cd81d92 2023-06-08 16:56:50,187 DEBUG [StoreOpener-fd35c481d37d36fe421704684cd81d92-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/info 2023-06-08 16:56:50,187 DEBUG [StoreOpener-fd35c481d37d36fe421704684cd81d92-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/info 2023-06-08 16:56:50,187 INFO [StoreOpener-fd35c481d37d36fe421704684cd81d92-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fd35c481d37d36fe421704684cd81d92 columnFamilyName info 2023-06-08 16:56:50,188 INFO [StoreOpener-fd35c481d37d36fe421704684cd81d92-1] regionserver.HStore(310): 
Store=fd35c481d37d36fe421704684cd81d92/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:56:50,189 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92 2023-06-08 16:56:50,189 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92 2023-06-08 16:56:50,192 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for fd35c481d37d36fe421704684cd81d92 2023-06-08 16:56:50,193 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 16:56:50,194 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened fd35c481d37d36fe421704684cd81d92; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=839512, jitterRate=0.06749525666236877}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-08 16:56:50,194 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for fd35c481d37d36fe421704684cd81d92: 2023-06-08 16:56:50,195 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92., pid=11, masterSystemTime=1686243410180 2023-06-08 16:56:50,196 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92. 2023-06-08 16:56:50,196 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92. 
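For orientation: the region open traced above is the tail end of the CreateTableProcedure (pid=9) for a table with a single 'info' column family. As a hedged illustration only (not taken from this log), here is a minimal HBase 2.x client sketch that creates such a table and, like the test, blocks until the create procedure completes; the class name and the connection setup are assumptions:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateLogRollingTestTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();   // assumes hbase-site.xml on the classpath
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName name = TableName.valueOf("TestLogRolling-testCompactionRecordDoesntBlockRolling");
      TableDescriptor desc = TableDescriptorBuilder.newBuilder(name)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("info"))  // single 'info' family, as in the log
          .build();
      admin.createTable(desc);  // synchronous: returns once the create procedure has finished
    }
  }
}
```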
2023-06-08 16:56:50,197 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=fd35c481d37d36fe421704684cd81d92, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:56:50,197 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1686243410197"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686243410197"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686243410197"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686243410197"}]},"ts":"1686243410197"} 2023-06-08 16:56:50,201 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-06-08 16:56:50,201 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure fd35c481d37d36fe421704684cd81d92, server=jenkins-hbase20.apache.org,39379,1686243408690 in 171 msec 2023-06-08 16:56:50,204 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-06-08 16:56:50,204 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=fd35c481d37d36fe421704684cd81d92, ASSIGN in 330 msec 2023-06-08 16:56:50,205 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-08 16:56:50,205 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686243410205"}]},"ts":"1686243410205"} 2023-06-08 16:56:50,207 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLED in hbase:meta 2023-06-08 16:56:50,209 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_POST_OPERATION 2023-06-08 16:56:50,212 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling in 379 msec 2023-06-08 16:56:54,807 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-08 16:56:54,943 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-08 16:56:59,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-08 16:56:59,839 INFO [Listener at localhost.localdomain/45823] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testCompactionRecordDoesntBlockRolling, procId: 9 completed 2023-06-08 16:56:59,844 DEBUG [Listener at 
localhost.localdomain/45823] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:56:59,845 DEBUG [Listener at localhost.localdomain/45823] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92. 2023-06-08 16:56:59,858 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] master.MasterRpcServices(933): Client=jenkins//148.251.75.209 procedure request for: flush-table-proc 2023-06-08 16:56:59,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] procedure.ProcedureCoordinator(165): Submitting procedure hbase:namespace 2023-06-08 16:56:59,866 INFO [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'hbase:namespace' 2023-06-08 16:56:59,866 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-08 16:56:59,866 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'hbase:namespace' starting 'acquire' 2023-06-08 16:56:59,867 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'hbase:namespace', kicking off acquire phase on members. 2023-06-08 16:56:59,867 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-06-08 16:56:59,867 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-06-08 16:56:59,868 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-08 16:56:59,868 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:56:59,869 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-08 16:56:59,869 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-08 16:56:59,869 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:56:59,869 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-06-08 16:56:59,869 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: 
/hbase/flush-table-proc/acquired/hbase:namespace 2023-06-08 16:56:59,869 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-06-08 16:56:59,870 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-06-08 16:56:59,870 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-06-08 16:56:59,870 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for hbase:namespace 2023-06-08 16:56:59,872 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:hbase:namespace 2023-06-08 16:56:59,872 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'hbase:namespace' with timeout 60000ms 2023-06-08 16:56:59,872 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-08 16:56:59,873 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'hbase:namespace' starting 'acquire' stage 2023-06-08 16:56:59,873 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-06-08 16:56:59,873 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-06-08 16:56:59,873 DEBUG [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on hbase:namespace,,1686243409304.1a56c5fd3cef3316bec993678c8d0415. 2023-06-08 16:56:59,874 DEBUG [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region hbase:namespace,,1686243409304.1a56c5fd3cef3316bec993678c8d0415. started... 
2023-06-08 16:56:59,874 INFO [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 1a56c5fd3cef3316bec993678c8d0415 1/1 column families, dataSize=78 B heapSize=488 B 2023-06-08 16:56:59,886 INFO [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/namespace/1a56c5fd3cef3316bec993678c8d0415/.tmp/info/ccc49133936b4054ac0aae197954952b 2023-06-08 16:56:59,895 DEBUG [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/namespace/1a56c5fd3cef3316bec993678c8d0415/.tmp/info/ccc49133936b4054ac0aae197954952b as hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/namespace/1a56c5fd3cef3316bec993678c8d0415/info/ccc49133936b4054ac0aae197954952b 2023-06-08 16:56:59,901 INFO [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/namespace/1a56c5fd3cef3316bec993678c8d0415/info/ccc49133936b4054ac0aae197954952b, entries=2, sequenceid=6, filesize=4.8 K 2023-06-08 16:56:59,902 INFO [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 1a56c5fd3cef3316bec993678c8d0415 in 28ms, sequenceid=6, compaction requested=false 2023-06-08 16:56:59,902 DEBUG [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 1a56c5fd3cef3316bec993678c8d0415: 2023-06-08 16:56:59,902 DEBUG [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on hbase:namespace,,1686243409304.1a56c5fd3cef3316bec993678c8d0415. 2023-06-08 16:56:59,902 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-06-08 16:56:59,902 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
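The hbase:namespace flush just traced is driven by a client-side flush request (the "procedure request for: flush-table-proc" line above); the client then polls the master until the procedure reports done (the "Waiting a max of 300000 ms" lines further down). A minimal sketch of issuing that flush with the 2.x Admin API; the helper class and the caller-supplied Connection are assumptions:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

final class FlushHelper {
  /** Ask the master to flush every region of the table; with this client the call
   *  drives the flush-table-proc coordination traced above and returns when it is done. */
  static void flushTable(Connection conn, String table) throws IOException {
    try (Admin admin = conn.getAdmin()) {
      admin.flush(TableName.valueOf(table));   // e.g. "hbase:namespace" as in this log
    }
  }
}
```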
2023-06-08 16:56:59,903 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:56:59,903 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'hbase:namespace' locally acquired 2023-06-08 16:56:59,903 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase20.apache.org,39379,1686243408690' joining acquired barrier for procedure (hbase:namespace) in zk 2023-06-08 16:56:59,904 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:56:59,904 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/hbase:namespace 2023-06-08 16:56:59,905 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:56:59,905 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-08 16:56:59,905 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-08 16:56:59,905 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/hbase:namespace 2023-06-08 16:56:59,905 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'hbase:namespace' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-06-08 16:56:59,906 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-08 16:56:59,906 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-08 16:56:59,906 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-08 16:56:59,906 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:56:59,906 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-08 16:56:59,907 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase20.apache.org,39379,1686243408690' joining acquired barrier for procedure 'hbase:namespace' on coordinator 2023-06-08 16:56:59,907 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'hbase:namespace' starting 'in-barrier' execution. 
2023-06-08 16:56:59,907 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@6526ea10[Count = 0] remaining members to acquire global barrier 2023-06-08 16:56:59,907 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/hbase:namespace 2023-06-08 16:56:59,908 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-06-08 16:56:59,909 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/hbase:namespace 2023-06-08 16:56:59,909 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/hbase:namespace 2023-06-08 16:56:59,909 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'hbase:namespace' received 'reached' from coordinator. 2023-06-08 16:56:59,909 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:56:59,909 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-06-08 16:56:59,909 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'hbase:namespace' locally completed 2023-06-08 16:56:59,909 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'hbase:namespace' completed for member 'jenkins-hbase20.apache.org,39379,1686243408690' in zk 2023-06-08 16:56:59,910 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:56:59,910 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'hbase:namespace' has notified controller of completion 2023-06-08 16:56:59,911 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:56:59,911 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-08 16:56:59,911 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-08 16:56:59,911 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
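What the coordinator and member are doing above is a plain two-phase barrier on ZooKeeper: the member announces itself under .../acquired/<proc>/<member>, then waits for the coordinator to create .../reached/<proc>. A simplified member-side sketch using the raw ZooKeeper client, with the paths copied from this log; the connect string, timeout, and error handling are assumptions, and the real HBase member also watches the abort node:

```java
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class BarrierMemberSketch {
  public static void main(String[] args) throws Exception {
    ZooKeeper zk = new ZooKeeper("127.0.0.1:62635", 30_000, event -> { });
    String base = "/hbase/flush-table-proc";
    String member = "jenkins-hbase20.apache.org,39379,1686243408690";

    // 1. "joining acquired barrier ... in zk": announce that local work is done.
    zk.create(base + "/acquired/hbase:namespace/" + member, new byte[0],
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);

    // 2. "Watch for global barrier reached": block until the coordinator creates the node.
    CountDownLatch reached = new CountDownLatch(1);
    if (zk.exists(base + "/reached/hbase:namespace", event -> reached.countDown()) != null) {
      reached.countDown();   // coordinator got there first
    }
    reached.await();
    zk.close();
  }
}
```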
2023-06-08 16:56:59,911 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'hbase:namespace' completed. 2023-06-08 16:56:59,911 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-08 16:56:59,912 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-08 16:56:59,912 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-08 16:56:59,912 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:56:59,912 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-08 16:56:59,913 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-08 16:56:59,913 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:56:59,914 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'hbase:namespace' member 'jenkins-hbase20.apache.org,39379,1686243408690': 2023-06-08 16:56:59,914 INFO [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'hbase:namespace' execution completed 2023-06-08 16:56:59,914 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-06-08 16:56:59,914 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase20.apache.org,39379,1686243408690' released barrier for procedure'hbase:namespace', counting down latch. Waiting for 0 more 2023-06-08 16:56:59,914 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-06-08 16:56:59,914 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:hbase:namespace 2023-06-08 16:56:59,914 INFO [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure hbase:namespaceincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-06-08 16:56:59,922 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-06-08 16:56:59,922 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-08 16:56:59,922 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/hbase:namespace 2023-06-08 16:56:59,922 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-06-08 16:56:59,922 
DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace 2023-06-08 16:56:59,922 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/hbase:namespace 2023-06-08 16:56:59,922 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-08 16:56:59,923 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-08 16:56:59,923 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-08 16:56:59,923 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-08 16:56:59,923 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:56:59,923 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-08 16:56:59,923 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace 2023-06-08 16:56:59,923 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-08 16:56:59,924 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-08 16:56:59,924 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-08 16:56:59,924 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:56:59,924 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:56:59,924 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-08 16:56:59,924 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-06-08 16:56:59,925 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:56:59,930 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:56:59,930 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-08 16:56:59,931 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, 
path=/hbase/flush-table-proc/acquired/hbase:namespace 2023-06-08 16:56:59,931 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-08 16:56:59,931 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-08 16:56:59,931 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-08 16:56:59,931 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace 2023-06-08 16:56:59,931 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-08 16:56:59,931 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:56:59,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'hbase:namespace' 2023-06-08 16:56:59,931 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-06-08 16:56:59,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-06-08 16:56:59,932 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-06-08 16:56:59,932 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-08 16:56:59,932 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-06-08 16:56:59,932 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-08 16:56:59,934 DEBUG [Listener at localhost.localdomain/45823] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : hbase:namespace'' to complete. 
(max 20000 ms per retry) 2023-06-08 16:56:59,934 DEBUG [Listener at localhost.localdomain/45823] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-06-08 16:57:09,934 DEBUG [Listener at localhost.localdomain/45823] client.HBaseAdmin(2704): Getting current status of procedure from master... 2023-06-08 16:57:09,946 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-06-08 16:57:09,955 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] master.MasterRpcServices(933): Client=jenkins//148.251.75.209 procedure request for: flush-table-proc 2023-06-08 16:57:09,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:09,957 INFO [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-08 16:57:09,957 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-08 16:57:09,958 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-06-08 16:57:09,958 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 2023-06-08 16:57:09,959 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:09,959 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:10,009 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-08 16:57:10,009 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:10,009 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-08 16:57:10,010 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-08 16:57:10,010 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet 
exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:10,010 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-06-08 16:57:10,010 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:10,011 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:10,012 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-06-08 16:57:10,012 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:10,012 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:10,013 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:10,013 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-06-08 16:57:10,013 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-08 16:57:10,014 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-06-08 16:57:10,015 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-06-08 16:57:10,015 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-06-08 16:57:10,015 DEBUG [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92. 2023-06-08 16:57:10,015 DEBUG [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92. started... 
2023-06-08 16:57:10,016 INFO [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing fd35c481d37d36fe421704684cd81d92 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-06-08 16:57:10,036 INFO [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=5 (bloomFilter=true), to=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/.tmp/info/70162c5ce6ab4ce782fafaa7828d51b2 2023-06-08 16:57:10,045 DEBUG [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/.tmp/info/70162c5ce6ab4ce782fafaa7828d51b2 as hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/info/70162c5ce6ab4ce782fafaa7828d51b2 2023-06-08 16:57:10,050 INFO [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/info/70162c5ce6ab4ce782fafaa7828d51b2, entries=1, sequenceid=5, filesize=5.8 K 2023-06-08 16:57:10,051 INFO [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for fd35c481d37d36fe421704684cd81d92 in 35ms, sequenceid=5, compaction requested=false 2023-06-08 16:57:10,052 DEBUG [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for fd35c481d37d36fe421704684cd81d92: 2023-06-08 16:57:10,052 DEBUG [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92. 2023-06-08 16:57:10,052 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-06-08 16:57:10,052 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
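The flush above picks up a single edit (entries=1) written to the 'info' family by the test before it requests the flush. A hedged sketch of that write-then-flush pattern from the client side; the row key, qualifier, and value are made-up placeholders, not values recovered from this log, and an open Connection supplied by the caller is assumed:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

final class WriteThenFlush {
  static void writeAndFlush(Connection conn) throws IOException {
    TableName name = TableName.valueOf("TestLogRolling-testCompactionRecordDoesntBlockRolling");
    try (Table table = conn.getTable(name); Admin admin = conn.getAdmin()) {
      Put put = new Put(Bytes.toBytes("row-0"));                              // hypothetical row key
      put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("q"), Bytes.toBytes("v"));
      table.put(put);     // lands in the memstore of region fd35c481d37d36fe421704684cd81d92
      admin.flush(name);  // forces the memstore out to an HFile, as traced above
    }
  }
}
```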
2023-06-08 16:57:10,052 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:10,052 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-06-08 16:57:10,052 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase20.apache.org,39379,1686243408690' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-06-08 16:57:10,128 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:10,128 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:10,128 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:10,128 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-08 16:57:10,129 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-08 16:57:10,129 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:10,129 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-06-08 16:57:10,130 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-08 16:57:10,131 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-08 16:57:10,132 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:10,133 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:10,134 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-08 16:57:10,135 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase20.apache.org,39379,1686243408690' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-06-08 16:57:10,135 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@6119726c[Count = 0] 
remaining members to acquire global barrier 2023-06-08 16:57:10,135 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-06-08 16:57:10,135 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:10,140 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:10,140 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:10,140 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:10,140 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 2023-06-08 16:57:10,140 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-06-08 16:57:10,140 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase20.apache.org,39379,1686243408690' in zk 2023-06-08 16:57:10,140 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:10,141 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-06-08 16:57:10,143 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-06-08 16:57:10,143 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-08 16:57:10,143 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 
2023-06-08 16:57:10,143 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:10,144 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:10,144 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-08 16:57:10,144 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-08 16:57:10,145 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-08 16:57:10,145 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-08 16:57:10,146 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:10,147 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:10,148 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-08 16:57:10,149 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:10,149 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:10,151 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase20.apache.org,39379,1686243408690': 2023-06-08 16:57:10,151 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase20.apache.org,39379,1686243408690' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-06-08 16:57:10,151 INFO [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-06-08 16:57:10,152 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
2023-06-08 16:57:10,152 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-06-08 16:57:10,152 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:10,152 INFO [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-06-08 16:57:10,157 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:10,157 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:10,158 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:10,158 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:10,158 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:10,158 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-08 16:57:10,158 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-08 16:57:10,158 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-08 16:57:10,159 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:10,159 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-08 16:57:10,159 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-08 16:57:10,159 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-08 16:57:10,160 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode 
/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:10,160 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:10,160 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-08 16:57:10,161 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:10,161 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:10,162 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:10,162 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-08 16:57:10,163 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:10,164 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:10,167 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:10,167 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:10,167 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-08 16:57:10,167 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-08 16:57:10,167 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-08 16:57:10,167 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-08 16:57:10,167 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:10,167 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, 
quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:10,167 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:10,167 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:10,167 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-06-08 16:57:10,167 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-08 16:57:10,167 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-06-08 16:57:10,167 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-08 16:57:10,167 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-08 16:57:10,167 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-08 16:57:10,167 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:10,168 DEBUG [Listener at localhost.localdomain/45823] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-06-08 16:57:10,168 DEBUG [Listener at localhost.localdomain/45823] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-06-08 16:57:20,168 DEBUG [Listener at localhost.localdomain/45823] client.HBaseAdmin(2704): Getting current status of procedure from master... 
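For context on the client side of the exchange above (HBaseAdmin waiting up to 300000 ms and checking the master every 10000 ms), the following is a minimal, hypothetical sketch of triggering the flush-table procedure and polling for its completion. It assumes an open cluster Configuration; the explicit polling loop only makes visible what HBaseAdmin.execProcedure already does internally, and is not the actual HBaseAdmin code.

// Illustrative sketch: run the flush-table procedure and poll the master,
// mirroring the waiting/sleeping pattern logged above.
import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushTableProcExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      String table = "TestLogRolling-testCompactionRecordDoesntBlockRolling";
      // Ask the master to run the globally coordinated flush-table procedure.
      admin.execProcedure("flush-table-proc", table, Collections.emptyMap());
      // Poll until the master reports the procedure as done (the log shows
      // HBaseAdmin sleeping 10000 ms between status checks).
      while (!admin.isProcedureFinished("flush-table-proc", table,
          Collections.emptyMap())) {
        Thread.sleep(10000);
      }
    }
  }
}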
2023-06-08 16:57:20,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-06-08 16:57:20,179 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] master.MasterRpcServices(933): Client=jenkins//148.251.75.209 procedure request for: flush-table-proc 2023-06-08 16:57:20,182 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-06-08 16:57:20,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:20,183 INFO [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-08 16:57:20,183 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-08 16:57:20,184 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-06-08 16:57:20,184 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 2023-06-08 16:57:20,184 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:20,184 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:20,351 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-08 16:57:20,352 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-08 16:57:20,352 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-08 16:57:20,352 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:20,353 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:20,353 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] 
zookeeper.ZKUtil(164): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:20,353 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-06-08 16:57:20,354 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:20,355 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-06-08 16:57:20,355 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:20,355 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:20,356 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-06-08 16:57:20,356 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:20,356 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-06-08 16:57:20,356 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-08 16:57:20,357 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-06-08 16:57:20,358 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-06-08 16:57:20,358 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-06-08 16:57:20,358 DEBUG [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92. 2023-06-08 16:57:20,358 DEBUG [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92. started... 
2023-06-08 16:57:20,359 INFO [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing fd35c481d37d36fe421704684cd81d92 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-06-08 16:57:20,374 INFO [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/.tmp/info/fdd1f243ef584d009490ba60fb338643 2023-06-08 16:57:20,385 DEBUG [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/.tmp/info/fdd1f243ef584d009490ba60fb338643 as hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/info/fdd1f243ef584d009490ba60fb338643 2023-06-08 16:57:20,392 INFO [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/info/fdd1f243ef584d009490ba60fb338643, entries=1, sequenceid=9, filesize=5.8 K 2023-06-08 16:57:20,393 INFO [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for fd35c481d37d36fe421704684cd81d92 in 34ms, sequenceid=9, compaction requested=false 2023-06-08 16:57:20,393 DEBUG [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for fd35c481d37d36fe421704684cd81d92: 2023-06-08 16:57:20,393 DEBUG [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92. 2023-06-08 16:57:20,393 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-06-08 16:57:20,394 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
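The flush entries above show the usual write-then-commit pattern: the flusher writes the new HFile under the region's .tmp directory and then moves it into the info family directory before HStore registers it. Below is a generic, hypothetical sketch of that pattern using only the plain Hadoop FileSystem API; the paths are placeholders, not the ones from this log, and this is not HBase's HRegionFileSystem code.

// Illustrative sketch: write a file under .tmp first, then rename it into
// its final location so readers only ever see complete files.
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TmpThenCommit {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);                 // assumes fs.defaultFS points at the test HDFS
    Path tmp = new Path("/data/region/.tmp/info/part-0"); // placeholder paths
    Path dst = new Path("/data/region/info/part-0");
    try (FSDataOutputStream out = fs.create(tmp, true)) {
      out.write("flushed cells".getBytes(StandardCharsets.UTF_8));
    }
    fs.mkdirs(dst.getParent());
    // The "commit" step is a rename into the family directory.
    if (!fs.rename(tmp, dst)) {
      throw new java.io.IOException("commit failed for " + dst);
    }
  }
}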
2023-06-08 16:57:20,394 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:20,394 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-06-08 16:57:20,394 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase20.apache.org,39379,1686243408690' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-06-08 16:57:20,395 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:20,395 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:20,395 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:20,395 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-08 16:57:20,395 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-08 16:57:20,396 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:20,396 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-06-08 16:57:20,396 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-08 16:57:20,396 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-08 16:57:20,396 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:20,397 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:20,397 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-08 16:57:20,397 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase20.apache.org,39379,1686243408690' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-06-08 16:57:20,397 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@241be02b[Count = 0] 
remaining members to acquire global barrier 2023-06-08 16:57:20,397 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-06-08 16:57:20,397 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:20,398 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:20,398 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:20,398 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:20,398 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 2023-06-08 16:57:20,398 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-06-08 16:57:20,398 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase20.apache.org,39379,1686243408690' in zk 2023-06-08 16:57:20,399 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:20,399 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-06-08 16:57:20,400 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-06-08 16:57:20,400 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:20,400 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-06-08 16:57:20,400 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:20,400 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-08 16:57:20,400 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-08 16:57:20,400 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-06-08 16:57:20,401 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-08 16:57:20,401 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-08 16:57:20,401 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:20,402 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:20,402 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-08 16:57:20,402 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:20,402 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:20,403 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase20.apache.org,39379,1686243408690': 2023-06-08 16:57:20,403 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase20.apache.org,39379,1686243408690' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-06-08 16:57:20,403 INFO [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-06-08 16:57:20,403 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
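The znode churn above follows a two-phase barrier: the coordinator creates acquired/<proc>, each member does its local work and adds a child under acquired/<proc>, the coordinator then creates reached/<proc>, and members acknowledge under reached/<proc> before everything is cleared. The following is a rough member-side sketch with the plain ZooKeeper client; the connect string, procedure name, member name, and base path are placeholders, and this is not the actual ZKProcedureMemberRpcs implementation.

// Rough member-side sketch of the acquire/reached barrier seen in the log.
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class BarrierMemberSketch {
  public static void main(String[] args) throws Exception {
    ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30000, event -> { });
    String proc = "demo-proc";
    String member = "member-1";
    String acquired = "/demo/flush-table-proc/acquired/" + proc;  // created by the coordinator
    String reached = "/demo/flush-table-proc/reached/" + proc;

    // 1. Do the local work (e.g. flush regions), then announce 'acquired'.
    zk.create(acquired + "/" + member, new byte[0],
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);

    // 2. Wait for the coordinator to create the 'reached' barrier node.
    CountDownLatch reachedLatch = new CountDownLatch(1);
    if (zk.exists(reached, event -> reachedLatch.countDown()) != null) {
      reachedLatch.countDown();   // barrier already up
    }
    reachedLatch.await();

    // 3. Acknowledge completion under 'reached'; the coordinator then clears the znodes.
    zk.create(reached + "/" + member, new byte[0],
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    zk.close();
  }
}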
2023-06-08 16:57:20,403 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-06-08 16:57:20,403 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:20,403 INFO [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-06-08 16:57:20,404 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:20,404 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:20,404 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:20,404 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-08 16:57:20,405 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-08 16:57:20,404 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:20,404 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-08 16:57:20,405 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:20,405 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-08 16:57:20,405 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:20,405 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-08 16:57:20,405 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-08 16:57:20,405 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): 
|----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:20,405 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:20,406 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-08 16:57:20,406 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:20,407 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:20,407 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:20,407 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-08 16:57:20,408 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:20,409 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:20,410 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:20,410 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-08 16:57:20,410 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:20,410 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-08 16:57:20,410 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-08 16:57:20,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-08 16:57:20,411 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 
2023-06-08 16:57:20,410 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-08 16:57:20,410 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:20,410 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-08 16:57:20,411 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:20,411 DEBUG [Listener at localhost.localdomain/45823] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-06-08 16:57:20,411 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-06-08 16:57:20,411 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:20,412 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-08 16:57:20,412 DEBUG [Listener at localhost.localdomain/45823] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-06-08 16:57:20,412 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-08 16:57:20,412 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:20,412 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:30,412 DEBUG [Listener at localhost.localdomain/45823] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-06-08 16:57:30,414 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-06-08 16:57:30,435 INFO [Listener at localhost.localdomain/45823] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/WALs/jenkins-hbase20.apache.org,39379,1686243408690/jenkins-hbase20.apache.org%2C39379%2C1686243408690.1686243409077 with entries=13, filesize=6.44 KB; new WAL /user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/WALs/jenkins-hbase20.apache.org,39379,1686243408690/jenkins-hbase20.apache.org%2C39379%2C1686243408690.1686243450420 2023-06-08 16:57:30,435 DEBUG [Listener at localhost.localdomain/45823] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35299,DS-5a04251b-2846-401f-926a-a7778ccbf5e2,DISK], DatanodeInfoWithStorage[127.0.0.1:44395,DS-15eaefe7-33f0-4c2d-8f1d-71840653ce83,DISK]] 2023-06-08 16:57:30,435 DEBUG [Listener at localhost.localdomain/45823] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/WALs/jenkins-hbase20.apache.org,39379,1686243408690/jenkins-hbase20.apache.org%2C39379%2C1686243408690.1686243409077 is not closed yet, will try archiving it next time 2023-06-08 16:57:30,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] master.MasterRpcServices(933): Client=jenkins//148.251.75.209 procedure request for: flush-table-proc 2023-06-08 16:57:30,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-06-08 16:57:30,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:30,446 INFO [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-08 16:57:30,446 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-08 16:57:30,447 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-06-08 16:57:30,447 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
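The WAL roll recorded above (old file closed with 13 entries, new FSHLog writer opened on the same two-datanode pipeline) is driven directly by the test. From a client, the closest equivalent is asking a region server to roll its WAL through the Admin API; the sketch below is hypothetical, and the server name is a placeholder rather than the one in this log.

// Hypothetical sketch: ask one region server to roll its WAL writer.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class RollWalExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      ServerName rs = ServerName.valueOf("regionserver.example.org", 16020, 1686243408690L);
      admin.rollWALWriter(rs);   // closes the current WAL file and opens a new one
    }
  }
}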
2023-06-08 16:57:30,448 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:30,448 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:30,449 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-08 16:57:30,449 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-08 16:57:30,449 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:30,449 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-08 16:57:30,450 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:30,450 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-06-08 16:57:30,450 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:30,450 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:30,451 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-06-08 16:57:30,451 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:30,451 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:30,451 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-06-08 16:57:30,451 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:30,451 DEBUG [member: 
'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-06-08 16:57:30,451 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-08 16:57:30,452 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-06-08 16:57:30,452 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-06-08 16:57:30,452 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-06-08 16:57:30,452 DEBUG [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92. 2023-06-08 16:57:30,452 DEBUG [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92. started... 2023-06-08 16:57:30,453 INFO [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing fd35c481d37d36fe421704684cd81d92 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-06-08 16:57:30,466 INFO [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=13 (bloomFilter=true), to=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/.tmp/info/d6e0c69175df4eb0ac55dd8f8783662d 2023-06-08 16:57:30,476 DEBUG [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/.tmp/info/d6e0c69175df4eb0ac55dd8f8783662d as hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/info/d6e0c69175df4eb0ac55dd8f8783662d 2023-06-08 16:57:30,483 INFO [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/info/d6e0c69175df4eb0ac55dd8f8783662d, entries=1, sequenceid=13, filesize=5.8 K 2023-06-08 16:57:30,484 INFO [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 
fd35c481d37d36fe421704684cd81d92 in 31ms, sequenceid=13, compaction requested=true 2023-06-08 16:57:30,485 DEBUG [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for fd35c481d37d36fe421704684cd81d92: 2023-06-08 16:57:30,485 DEBUG [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92. 2023-06-08 16:57:30,485 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-06-08 16:57:30,485 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-06-08 16:57:30,485 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:30,485 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-06-08 16:57:30,485 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase20.apache.org,39379,1686243408690' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-06-08 16:57:30,487 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:30,487 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:30,487 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:30,487 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-08 16:57:30,487 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-08 16:57:30,487 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:30,487 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from 
coordinator 2023-06-08 16:57:30,487 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-08 16:57:30,488 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-08 16:57:30,488 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:30,488 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:30,488 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-08 16:57:30,489 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase20.apache.org,39379,1686243408690' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-06-08 16:57:30,489 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@4751244d[Count = 0] remaining members to acquire global barrier 2023-06-08 16:57:30,489 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-06-08 16:57:30,489 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:30,489 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:30,490 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:30,490 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:30,490 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 
2023-06-08 16:57:30,490 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-06-08 16:57:30,490 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase20.apache.org,39379,1686243408690' in zk 2023-06-08 16:57:30,490 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:30,490 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-06-08 16:57:30,491 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-06-08 16:57:30,491 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:30,491 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-08 16:57:30,491 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 
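On the coordinator side, the "Waiting on: java.util.concurrent.CountDownLatch...[Count = 0]" entries correspond to one latch per barrier phase, sized to the number of members (a single region server here) and counted down as each member checks in over ZooKeeper. The class below is a generic sketch of that bookkeeping in plain Java, not HBase's Procedure class; the member list and timeout are illustrative.

// Generic sketch of the coordinator's latch bookkeeping: one latch per phase,
// counted down as each member's znode appears.
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class BarrierCoordinatorSketch {
  private final CountDownLatch acquired;
  private final CountDownLatch released;

  BarrierCoordinatorSketch(List<String> members) {
    this.acquired = new CountDownLatch(members.size());
    this.released = new CountDownLatch(members.size());
  }

  // Called from the ZK watcher when a member's 'acquired' child appears.
  void memberAcquired(String member) { acquired.countDown(); }

  // Called when a member's 'reached' child appears.
  void memberReleased(String member) { released.countDown(); }

  void run(long timeoutMs) throws InterruptedException {
    if (!acquired.await(timeoutMs, TimeUnit.MILLISECONDS)) {
      throw new IllegalStateException("members did not acquire in time");
    }
    // ... create the 'reached' barrier znode here ...
    if (!released.await(timeoutMs, TimeUnit.MILLISECONDS)) {
      throw new IllegalStateException("members did not release in time");
    }
    // ... clear the procedure znodes, as the coordinator does above ...
  }

  public static void main(String[] args) throws InterruptedException {
    BarrierCoordinatorSketch c = new BarrierCoordinatorSketch(List.of("rs1"));
    c.memberAcquired("rs1");
    c.memberReleased("rs1");
    c.run(60000);   // mirrors the 60000 ms procedure timeout in the log
  }
}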
2023-06-08 16:57:30,491 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:30,491 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-08 16:57:30,492 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-08 16:57:30,492 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-08 16:57:30,492 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-08 16:57:30,492 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:30,493 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:30,493 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-08 16:57:30,493 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:30,494 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:30,494 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase20.apache.org,39379,1686243408690': 2023-06-08 16:57:30,494 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase20.apache.org,39379,1686243408690' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-06-08 16:57:30,494 INFO [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-06-08 16:57:30,494 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
2023-06-08 16:57:30,494 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-06-08 16:57:30,494 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:30,494 INFO [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-06-08 16:57:30,495 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:30,495 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:30,495 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:30,495 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-08 16:57:30,496 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:30,495 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:30,496 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-08 16:57:30,496 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-08 16:57:30,496 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-08 16:57:30,496 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-08 16:57:30,496 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-08 16:57:30,496 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:30,496 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode 
/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:30,498 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:30,499 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-08 16:57:30,499 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:30,499 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:30,499 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:30,500 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-08 16:57:30,500 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:30,500 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:30,504 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:30,504 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-08 16:57:30,504 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:30,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-08 16:57:30,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 
2023-06-08 16:57:30,504 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:30,504 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-08 16:57:30,504 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-08 16:57:30,504 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-06-08 16:57:30,504 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-08 16:57:30,504 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:30,505 DEBUG [Listener at localhost.localdomain/45823] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-06-08 16:57:30,505 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:30,505 DEBUG [Listener at localhost.localdomain/45823] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-06-08 16:57:30,505 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-08 16:57:30,505 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:30,505 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:30,505 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-08 16:57:40,506 DEBUG [Listener at localhost.localdomain/45823] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-06-08 16:57:40,508 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-06-08 16:57:40,510 DEBUG [Listener at localhost.localdomain/45823] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-08 16:57:40,520 DEBUG [Listener at localhost.localdomain/45823] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 17769 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-08 16:57:40,520 DEBUG [Listener at localhost.localdomain/45823] regionserver.HStore(1912): fd35c481d37d36fe421704684cd81d92/info is initiating minor compaction (all files) 2023-06-08 16:57:40,521 INFO [Listener at localhost.localdomain/45823] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-08 16:57:40,521 INFO [Listener at localhost.localdomain/45823] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-08 16:57:40,521 INFO [Listener at localhost.localdomain/45823] regionserver.HRegion(2259): Starting compaction of fd35c481d37d36fe421704684cd81d92/info in TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92. 2023-06-08 16:57:40,522 INFO [Listener at localhost.localdomain/45823] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/info/70162c5ce6ab4ce782fafaa7828d51b2, hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/info/fdd1f243ef584d009490ba60fb338643, hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/info/d6e0c69175df4eb0ac55dd8f8783662d] into tmpdir=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/.tmp, totalSize=17.4 K 2023-06-08 16:57:40,523 DEBUG [Listener at localhost.localdomain/45823] compactions.Compactor(207): Compacting 70162c5ce6ab4ce782fafaa7828d51b2, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=5, earliestPutTs=1686243429950 2023-06-08 16:57:40,524 DEBUG [Listener at localhost.localdomain/45823] compactions.Compactor(207): Compacting fdd1f243ef584d009490ba60fb338643, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=9, earliestPutTs=1686243440171 2023-06-08 16:57:40,525 DEBUG [Listener at localhost.localdomain/45823] compactions.Compactor(207): Compacting d6e0c69175df4eb0ac55dd8f8783662d, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=13, earliestPutTs=1686243450417 2023-06-08 16:57:40,538 INFO [Listener at localhost.localdomain/45823] throttle.PressureAwareThroughputController(145): fd35c481d37d36fe421704684cd81d92#info#compaction#19 average 
throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-08 16:57:40,552 DEBUG [Listener at localhost.localdomain/45823] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/.tmp/info/64e727fc0a8b43569486768934f3699d as hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/info/64e727fc0a8b43569486768934f3699d 2023-06-08 16:57:40,559 INFO [Listener at localhost.localdomain/45823] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in fd35c481d37d36fe421704684cd81d92/info of fd35c481d37d36fe421704684cd81d92 into 64e727fc0a8b43569486768934f3699d(size=8.0 K), total size for store is 8.0 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-06-08 16:57:40,559 DEBUG [Listener at localhost.localdomain/45823] regionserver.HRegion(2289): Compaction status journal for fd35c481d37d36fe421704684cd81d92: 2023-06-08 16:57:40,569 INFO [Listener at localhost.localdomain/45823] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/WALs/jenkins-hbase20.apache.org,39379,1686243408690/jenkins-hbase20.apache.org%2C39379%2C1686243408690.1686243450420 with entries=4, filesize=2.45 KB; new WAL /user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/WALs/jenkins-hbase20.apache.org,39379,1686243408690/jenkins-hbase20.apache.org%2C39379%2C1686243408690.1686243460560 2023-06-08 16:57:40,569 DEBUG [Listener at localhost.localdomain/45823] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35299,DS-5a04251b-2846-401f-926a-a7778ccbf5e2,DISK], DatanodeInfoWithStorage[127.0.0.1:44395,DS-15eaefe7-33f0-4c2d-8f1d-71840653ce83,DISK]] 2023-06-08 16:57:40,569 DEBUG [Listener at localhost.localdomain/45823] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/WALs/jenkins-hbase20.apache.org,39379,1686243408690/jenkins-hbase20.apache.org%2C39379%2C1686243408690.1686243450420 is not closed yet, will try archiving it next time 2023-06-08 16:57:40,570 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/WALs/jenkins-hbase20.apache.org,39379,1686243408690/jenkins-hbase20.apache.org%2C39379%2C1686243408690.1686243409077 to hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/oldWALs/jenkins-hbase20.apache.org%2C39379%2C1686243408690.1686243409077 2023-06-08 16:57:40,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] master.MasterRpcServices(933): Client=jenkins//148.251.75.209 procedure request for: flush-table-proc 2023-06-08 16:57:40,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 
2023-06-08 16:57:40,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:40,578 INFO [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-08 16:57:40,578 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-08 16:57:40,579 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-06-08 16:57:40,579 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 2023-06-08 16:57:40,579 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:40,579 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:40,581 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:40,581 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-08 16:57:40,581 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-08 16:57:40,581 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-08 16:57:40,582 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:40,582 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-06-08 16:57:40,582 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:40,582 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet 
exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:40,583 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-06-08 16:57:40,583 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:40,583 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:40,583 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-06-08 16:57:40,583 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:40,583 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-06-08 16:57:40,583 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-06-08 16:57:40,584 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-06-08 16:57:40,586 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-06-08 16:57:40,586 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-06-08 16:57:40,586 DEBUG [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92. 2023-06-08 16:57:40,586 DEBUG [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92. started... 
2023-06-08 16:57:40,586 INFO [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing fd35c481d37d36fe421704684cd81d92 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-06-08 16:57:40,600 INFO [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=18 (bloomFilter=true), to=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/.tmp/info/f22858ac340a4c7185b158cce50aa6fe 2023-06-08 16:57:40,606 DEBUG [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/.tmp/info/f22858ac340a4c7185b158cce50aa6fe as hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/info/f22858ac340a4c7185b158cce50aa6fe 2023-06-08 16:57:40,611 INFO [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/info/f22858ac340a4c7185b158cce50aa6fe, entries=1, sequenceid=18, filesize=5.8 K 2023-06-08 16:57:40,612 INFO [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for fd35c481d37d36fe421704684cd81d92 in 26ms, sequenceid=18, compaction requested=false 2023-06-08 16:57:40,612 DEBUG [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for fd35c481d37d36fe421704684cd81d92: 2023-06-08 16:57:40,612 DEBUG [rs(jenkins-hbase20.apache.org,39379,1686243408690)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92. 2023-06-08 16:57:40,612 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-06-08 16:57:40,612 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
2023-06-08 16:57:40,612 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:40,612 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-06-08 16:57:40,612 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase20.apache.org,39379,1686243408690' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-06-08 16:57:40,614 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:40,614 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:40,614 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:40,614 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-06-08 16:57:40,614 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-06-08 16:57:40,614 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:40,614 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-06-08 16:57:40,614 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-06-08 16:57:40,615 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-08 16:57:40,615 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:40,615 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:40,615 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-08 16:57:40,616 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase20.apache.org,39379,1686243408690' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-06-08 16:57:40,616 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@15f520dd[Count = 0] 
remaining members to acquire global barrier 2023-06-08 16:57:40,616 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-06-08 16:57:40,616 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:40,616 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:40,616 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:40,616 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:40,616 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 2023-06-08 16:57:40,616 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:40,617 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-06-08 16:57:40,617 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-06-08 16:57:40,617 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase20.apache.org,39379,1686243408690' in zk 2023-06-08 16:57:40,618 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:40,618 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-06-08 16:57:40,618 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:40,618 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureUtil(225): Current zk system:
2023-06-08 16:57:40,618 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc
2023-06-08 16:57:40,618 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer.
2023-06-08 16:57:40,618 DEBUG [member: 'jenkins-hbase20.apache.org,39379,1686243408690' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed.
2023-06-08 16:57:40,619 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort
2023-06-08 16:57:40,619 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired
2023-06-08 16:57:40,620 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 16:57:40,620 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39379,1686243408690
2023-06-08 16:57:40,620 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached
2023-06-08 16:57:40,621 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 16:57:40,621 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39379,1686243408690
2023-06-08 16:57:40,622 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase20.apache.org,39379,1686243408690':
2023-06-08 16:57:40,622 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase20.apache.org,39379,1686243408690' released barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more
2023-06-08 16:57:40,622 INFO [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed
2023-06-08 16:57:40,622 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase.
2023-06-08 16:57:40,622 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures
2023-06-08 16:57:40,622 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 16:57:40,622 INFO [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRolling including nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort
2023-06-08 16:57:40,639 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 16:57:40,639 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 16:57:40,639 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 16:57:40,639 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 16:57:40,639 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-06-08 16:57:40,639 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system:
2023-06-08 16:57:40,639 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc
2023-06-08 16:57:40,639 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort
2023-06-08 16:57:40,639 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort
2023-06-08 16:57:40,639 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-06-08 16:57:40,640 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690
2023-06-08 16:57:40,640 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort
2023-06-08 16:57:40,640 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode
/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:40,640 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:40,640 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-06-08 16:57:40,640 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:40,640 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:40,641 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:40,641 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-06-08 16:57:40,641 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:40,641 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:40,647 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:40,647 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-06-08 16:57:40,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-06-08 16:57:40,647 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-06-08 16:57:40,647 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-08 16:57:40,647 DEBUG [(jenkins-hbase20.apache.org,44053,1686243408656)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-06-08 16:57:40,647 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:40,647 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:40,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-06-08 16:57:40,647 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-06-08 16:57:40,647 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:40,647 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-06-08 16:57:40,648 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:40,648 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-06-08 16:57:40,648 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:40,648 DEBUG [Listener at localhost.localdomain/45823] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-06-08 16:57:40,648 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-08 16:57:40,648 DEBUG [Listener at localhost.localdomain/45823] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 
2023-06-08 16:57:40,648 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-06-08 16:57:50,649 DEBUG [Listener at localhost.localdomain/45823] client.HBaseAdmin(2704): Getting current status of procedure from master... 2023-06-08 16:57:50,650 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44053] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-06-08 16:57:50,666 INFO [Listener at localhost.localdomain/45823] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/WALs/jenkins-hbase20.apache.org,39379,1686243408690/jenkins-hbase20.apache.org%2C39379%2C1686243408690.1686243460560 with entries=3, filesize=1.97 KB; new WAL /user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/WALs/jenkins-hbase20.apache.org,39379,1686243408690/jenkins-hbase20.apache.org%2C39379%2C1686243408690.1686243470654 2023-06-08 16:57:50,666 DEBUG [Listener at localhost.localdomain/45823] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35299,DS-5a04251b-2846-401f-926a-a7778ccbf5e2,DISK], DatanodeInfoWithStorage[127.0.0.1:44395,DS-15eaefe7-33f0-4c2d-8f1d-71840653ce83,DISK]] 2023-06-08 16:57:50,666 DEBUG [Listener at localhost.localdomain/45823] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/WALs/jenkins-hbase20.apache.org,39379,1686243408690/jenkins-hbase20.apache.org%2C39379%2C1686243408690.1686243460560 is not closed yet, will try archiving it next time 2023-06-08 16:57:50,666 INFO [Listener at localhost.localdomain/45823] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-08 16:57:50,666 INFO [Listener at localhost.localdomain/45823] client.ConnectionImplementation(1980): Closing master protocol: MasterService 2023-06-08 16:57:50,666 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/WALs/jenkins-hbase20.apache.org,39379,1686243408690/jenkins-hbase20.apache.org%2C39379%2C1686243408690.1686243450420 to hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/oldWALs/jenkins-hbase20.apache.org%2C39379%2C1686243408690.1686243450420 2023-06-08 16:57:50,666 DEBUG [Listener at localhost.localdomain/45823] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5aae965e to 127.0.0.1:62635 2023-06-08 16:57:50,667 DEBUG [Listener at localhost.localdomain/45823] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 16:57:50,669 DEBUG [Listener at localhost.localdomain/45823] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-06-08 16:57:50,669 DEBUG [Listener at localhost.localdomain/45823] util.JVMClusterUtil(257): Found active master hash=1540356171, stopped=false 2023-06-08 16:57:50,670 INFO [Listener at localhost.localdomain/45823] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,44053,1686243408656 2023-06-08 16:57:50,672 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, 
state=SyncConnected, path=/hbase/running 2023-06-08 16:57:50,673 INFO [Listener at localhost.localdomain/45823] procedure2.ProcedureExecutor(629): Stopping 2023-06-08 16:57:50,673 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-08 16:57:50,673 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:57:50,673 DEBUG [Listener at localhost.localdomain/45823] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x002a30d0 to 127.0.0.1:62635 2023-06-08 16:57:50,674 DEBUG [Listener at localhost.localdomain/45823] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 16:57:50,674 INFO [Listener at localhost.localdomain/45823] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,39379,1686243408690' ***** 2023-06-08 16:57:50,674 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 16:57:50,675 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 16:57:50,674 INFO [Listener at localhost.localdomain/45823] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-08 16:57:50,675 INFO [RS:0;jenkins-hbase20:39379] regionserver.HeapMemoryManager(220): Stopping 2023-06-08 16:57:50,675 INFO [RS:0;jenkins-hbase20:39379] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-08 16:57:50,675 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-08 16:57:50,675 INFO [RS:0;jenkins-hbase20:39379] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-06-08 16:57:50,676 INFO [RS:0;jenkins-hbase20:39379] regionserver.HRegionServer(3303): Received CLOSE for 1a56c5fd3cef3316bec993678c8d0415 2023-06-08 16:57:50,677 INFO [RS:0;jenkins-hbase20:39379] regionserver.HRegionServer(3303): Received CLOSE for fd35c481d37d36fe421704684cd81d92 2023-06-08 16:57:50,677 INFO [RS:0;jenkins-hbase20:39379] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:50,677 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1a56c5fd3cef3316bec993678c8d0415, disabling compactions & flushes 2023-06-08 16:57:50,677 DEBUG [RS:0;jenkins-hbase20:39379] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3cf1956f to 127.0.0.1:62635 2023-06-08 16:57:50,677 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686243409304.1a56c5fd3cef3316bec993678c8d0415. 
2023-06-08 16:57:50,677 DEBUG [RS:0;jenkins-hbase20:39379] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 16:57:50,677 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686243409304.1a56c5fd3cef3316bec993678c8d0415. 2023-06-08 16:57:50,677 INFO [RS:0;jenkins-hbase20:39379] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-08 16:57:50,677 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686243409304.1a56c5fd3cef3316bec993678c8d0415. after waiting 0 ms 2023-06-08 16:57:50,677 INFO [RS:0;jenkins-hbase20:39379] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-08 16:57:50,677 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686243409304.1a56c5fd3cef3316bec993678c8d0415. 2023-06-08 16:57:50,677 INFO [RS:0;jenkins-hbase20:39379] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-06-08 16:57:50,677 INFO [RS:0;jenkins-hbase20:39379] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-08 16:57:50,678 INFO [RS:0;jenkins-hbase20:39379] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-06-08 16:57:50,678 DEBUG [RS:0;jenkins-hbase20:39379] regionserver.HRegionServer(1478): Online Regions={1a56c5fd3cef3316bec993678c8d0415=hbase:namespace,,1686243409304.1a56c5fd3cef3316bec993678c8d0415., fd35c481d37d36fe421704684cd81d92=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92., 1588230740=hbase:meta,,1.1588230740} 2023-06-08 16:57:50,678 DEBUG [RS:0;jenkins-hbase20:39379] regionserver.HRegionServer(1504): Waiting on 1588230740, 1a56c5fd3cef3316bec993678c8d0415, fd35c481d37d36fe421704684cd81d92 2023-06-08 16:57:50,680 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-08 16:57:50,681 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-08 16:57:50,681 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-08 16:57:50,681 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-08 16:57:50,681 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-08 16:57:50,681 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.10 KB heapSize=5.61 KB 2023-06-08 16:57:50,689 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/namespace/1a56c5fd3cef3316bec993678c8d0415/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-06-08 16:57:50,690 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686243409304.1a56c5fd3cef3316bec993678c8d0415. 
2023-06-08 16:57:50,690 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1a56c5fd3cef3316bec993678c8d0415: 2023-06-08 16:57:50,690 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1686243409304.1a56c5fd3cef3316bec993678c8d0415. 2023-06-08 16:57:50,690 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing fd35c481d37d36fe421704684cd81d92, disabling compactions & flushes 2023-06-08 16:57:50,691 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92. 2023-06-08 16:57:50,691 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92. 2023-06-08 16:57:50,691 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92. after waiting 0 ms 2023-06-08 16:57:50,691 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92. 2023-06-08 16:57:50,691 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing fd35c481d37d36fe421704684cd81d92 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-06-08 16:57:50,702 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.85 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/meta/1588230740/.tmp/info/becae1982f59454280f432b02dfe41e7 2023-06-08 16:57:50,706 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=22 (bloomFilter=true), to=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/.tmp/info/32e30ddc3122431dbc3271c48724b481 2023-06-08 16:57:50,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/.tmp/info/32e30ddc3122431dbc3271c48724b481 as hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/info/32e30ddc3122431dbc3271c48724b481 2023-06-08 16:57:50,719 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/info/32e30ddc3122431dbc3271c48724b481, entries=1, sequenceid=22, filesize=5.8 K 2023-06-08 16:57:50,720 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for fd35c481d37d36fe421704684cd81d92 in 29ms, sequenceid=22, compaction requested=true 2023-06-08 16:57:50,723 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/info/70162c5ce6ab4ce782fafaa7828d51b2, hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/info/fdd1f243ef584d009490ba60fb338643, hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/info/d6e0c69175df4eb0ac55dd8f8783662d] to archive 2023-06-08 16:57:50,724 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92.-1] backup.HFileArchiver(360): Archiving compacted files. 2023-06-08 16:57:50,728 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/info/70162c5ce6ab4ce782fafaa7828d51b2 to hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/info/70162c5ce6ab4ce782fafaa7828d51b2 2023-06-08 16:57:50,728 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=264 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/meta/1588230740/.tmp/table/0a6bef13b1cd4c64851ae118548a17d1 2023-06-08 16:57:50,729 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/info/fdd1f243ef584d009490ba60fb338643 to hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/info/fdd1f243ef584d009490ba60fb338643 2023-06-08 16:57:50,731 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/info/d6e0c69175df4eb0ac55dd8f8783662d to 
hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/info/d6e0c69175df4eb0ac55dd8f8783662d 2023-06-08 16:57:50,741 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/meta/1588230740/.tmp/info/becae1982f59454280f432b02dfe41e7 as hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/meta/1588230740/info/becae1982f59454280f432b02dfe41e7 2023-06-08 16:57:50,743 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/fd35c481d37d36fe421704684cd81d92/recovered.edits/25.seqid, newMaxSeqId=25, maxSeqId=1 2023-06-08 16:57:50,744 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92. 2023-06-08 16:57:50,744 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for fd35c481d37d36fe421704684cd81d92: 2023-06-08 16:57:50,744 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1686243409831.fd35c481d37d36fe421704684cd81d92. 2023-06-08 16:57:50,748 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/meta/1588230740/info/becae1982f59454280f432b02dfe41e7, entries=20, sequenceid=14, filesize=7.6 K 2023-06-08 16:57:50,749 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/meta/1588230740/.tmp/table/0a6bef13b1cd4c64851ae118548a17d1 as hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/meta/1588230740/table/0a6bef13b1cd4c64851ae118548a17d1 2023-06-08 16:57:50,754 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/meta/1588230740/table/0a6bef13b1cd4c64851ae118548a17d1, entries=4, sequenceid=14, filesize=4.9 K 2023-06-08 16:57:50,755 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.10 KB/3178, heapSize ~5.33 KB/5456, currentSize=0 B/0 for 1588230740 in 74ms, sequenceid=14, compaction requested=false 2023-06-08 16:57:50,764 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-06-08 16:57:50,765 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-06-08 16:57:50,765 INFO 
[RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-08 16:57:50,765 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-08 16:57:50,765 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-06-08 16:57:50,878 INFO [RS:0;jenkins-hbase20:39379] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,39379,1686243408690; all regions closed. 2023-06-08 16:57:50,879 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/WALs/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:50,893 DEBUG [RS:0;jenkins-hbase20:39379] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/oldWALs 2023-06-08 16:57:50,893 INFO [RS:0;jenkins-hbase20:39379] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C39379%2C1686243408690.meta:.meta(num 1686243409241) 2023-06-08 16:57:50,893 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/WALs/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:50,898 DEBUG [RS:0;jenkins-hbase20:39379] wal.AbstractFSWAL(1028): Moved 2 WAL file(s) to /user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/oldWALs 2023-06-08 16:57:50,898 INFO [RS:0;jenkins-hbase20:39379] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C39379%2C1686243408690:(num 1686243470654) 2023-06-08 16:57:50,898 DEBUG [RS:0;jenkins-hbase20:39379] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 16:57:50,898 INFO [RS:0;jenkins-hbase20:39379] regionserver.LeaseManager(133): Closed leases 2023-06-08 16:57:50,898 INFO [RS:0;jenkins-hbase20:39379] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-06-08 16:57:50,898 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-06-08 16:57:50,899 INFO [RS:0;jenkins-hbase20:39379] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:39379 2023-06-08 16:57:50,902 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,39379,1686243408690 2023-06-08 16:57:50,902 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 16:57:50,903 ERROR [Listener at localhost.localdomain/45823-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@7a8ca84e rejected from java.util.concurrent.ThreadPoolExecutor@7eb323c7[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 34] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:603) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-06-08 16:57:50,903 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 16:57:50,904 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,39379,1686243408690] 2023-06-08 16:57:50,904 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,39379,1686243408690; numProcessing=1 2023-06-08 16:57:50,904 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,39379,1686243408690 already deleted, retry=false 2023-06-08 16:57:50,905 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,39379,1686243408690 expired; onlineServers=0 2023-06-08 16:57:50,905 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,44053,1686243408656' ***** 2023-06-08 16:57:50,905 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-06-08 16:57:50,905 DEBUG [M:0;jenkins-hbase20:44053] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5ff067ea, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-08 16:57:50,905 INFO [M:0;jenkins-hbase20:44053] regionserver.HRegionServer(1144): stopping server 
jenkins-hbase20.apache.org,44053,1686243408656 2023-06-08 16:57:50,905 INFO [M:0;jenkins-hbase20:44053] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,44053,1686243408656; all regions closed. 2023-06-08 16:57:50,905 DEBUG [M:0;jenkins-hbase20:44053] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 16:57:50,905 DEBUG [M:0;jenkins-hbase20:44053] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-06-08 16:57:50,906 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-06-08 16:57:50,906 DEBUG [M:0;jenkins-hbase20:44053] cleaner.HFileCleaner(317): Stopping file delete threads 2023-06-08 16:57:50,906 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686243408863] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686243408863,5,FailOnTimeoutGroup] 2023-06-08 16:57:50,906 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686243408864] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686243408864,5,FailOnTimeoutGroup] 2023-06-08 16:57:50,906 INFO [M:0;jenkins-hbase20:44053] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-06-08 16:57:50,907 INFO [M:0;jenkins-hbase20:44053] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-06-08 16:57:50,907 INFO [M:0;jenkins-hbase20:44053] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-06-08 16:57:50,907 DEBUG [M:0;jenkins-hbase20:44053] master.HMaster(1512): Stopping service threads 2023-06-08 16:57:50,907 INFO [M:0;jenkins-hbase20:44053] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-06-08 16:57:50,908 ERROR [M:0;jenkins-hbase20:44053] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-06-08 16:57:50,908 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-06-08 16:57:50,908 INFO [M:0;jenkins-hbase20:44053] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-06-08 16:57:50,908 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:57:50,908 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-06-08 16:57:50,909 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-08 16:57:50,909 DEBUG [M:0;jenkins-hbase20:44053] zookeeper.ZKUtil(398): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-06-08 16:57:50,909 WARN [M:0;jenkins-hbase20:44053] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-06-08 16:57:50,909 INFO [M:0;jenkins-hbase20:44053] assignment.AssignmentManager(315): Stopping assignment manager 2023-06-08 16:57:50,909 INFO [M:0;jenkins-hbase20:44053] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-06-08 16:57:50,910 DEBUG [M:0;jenkins-hbase20:44053] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-08 16:57:50,910 INFO [M:0;jenkins-hbase20:44053] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:57:50,910 DEBUG [M:0;jenkins-hbase20:44053] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:57:50,910 DEBUG [M:0;jenkins-hbase20:44053] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-08 16:57:50,910 DEBUG [M:0;jenkins-hbase20:44053] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-08 16:57:50,910 INFO [M:0;jenkins-hbase20:44053] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.94 KB heapSize=47.38 KB 2023-06-08 16:57:50,923 INFO [M:0;jenkins-hbase20:44053] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.94 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/2c826c04d2fc484c94ae30c4e6218053 2023-06-08 16:57:50,930 INFO [M:0;jenkins-hbase20:44053] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2c826c04d2fc484c94ae30c4e6218053 2023-06-08 16:57:50,931 DEBUG [M:0;jenkins-hbase20:44053] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/2c826c04d2fc484c94ae30c4e6218053 as hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/2c826c04d2fc484c94ae30c4e6218053 2023-06-08 16:57:50,936 INFO [M:0;jenkins-hbase20:44053] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2c826c04d2fc484c94ae30c4e6218053 2023-06-08 16:57:50,936 INFO [M:0;jenkins-hbase20:44053] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38111/user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/2c826c04d2fc484c94ae30c4e6218053, entries=11, sequenceid=100, filesize=6.1 K 2023-06-08 16:57:50,937 INFO [M:0;jenkins-hbase20:44053] regionserver.HRegion(2948): Finished flush of dataSize ~38.94 KB/39878, heapSize ~47.36 KB/48496, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 27ms, sequenceid=100, compaction requested=false 2023-06-08 16:57:50,939 INFO [M:0;jenkins-hbase20:44053] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:57:50,939 DEBUG [M:0;jenkins-hbase20:44053] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-08 16:57:50,939 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/215398da-21b3-358e-5157-65fa2dd5b6c2/MasterData/WALs/jenkins-hbase20.apache.org,44053,1686243408656 2023-06-08 16:57:50,942 INFO [M:0;jenkins-hbase20:44053] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-06-08 16:57:50,942 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-06-08 16:57:50,943 INFO [M:0;jenkins-hbase20:44053] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:44053 2023-06-08 16:57:50,944 DEBUG [M:0;jenkins-hbase20:44053] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,44053,1686243408656 already deleted, retry=false 2023-06-08 16:57:50,954 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-08 16:57:51,004 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 16:57:51,004 INFO [RS:0;jenkins-hbase20:39379] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,39379,1686243408690; zookeeper connection closed. 2023-06-08 16:57:51,004 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): regionserver:39379-0x101cba693250001, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 16:57:51,004 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6a228d52] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6a228d52 2023-06-08 16:57:51,004 INFO [Listener at localhost.localdomain/45823] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-06-08 16:57:51,104 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 16:57:51,104 DEBUG [Listener at localhost.localdomain/45823-EventThread] zookeeper.ZKWatcher(600): master:44053-0x101cba693250000, quorum=127.0.0.1:62635, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 16:57:51,104 INFO [M:0;jenkins-hbase20:44053] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,44053,1686243408656; zookeeper connection closed. 
2023-06-08 16:57:51,106 WARN [Listener at localhost.localdomain/45823] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 16:57:51,113 INFO [Listener at localhost.localdomain/45823] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 16:57:51,226 WARN [BP-1651038660-148.251.75.209-1686243408203 heartbeating to localhost.localdomain/127.0.0.1:38111] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-08 16:57:51,226 WARN [BP-1651038660-148.251.75.209-1686243408203 heartbeating to localhost.localdomain/127.0.0.1:38111] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1651038660-148.251.75.209-1686243408203 (Datanode Uuid f17876ac-b581-4479-8a5b-557a2aed8db5) service to localhost.localdomain/127.0.0.1:38111 2023-06-08 16:57:51,227 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046/cluster_354a9fa2-4437-cd37-4d0d-722cf68a5a06/dfs/data/data3/current/BP-1651038660-148.251.75.209-1686243408203] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:57:51,228 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046/cluster_354a9fa2-4437-cd37-4d0d-722cf68a5a06/dfs/data/data4/current/BP-1651038660-148.251.75.209-1686243408203] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:57:51,230 WARN [Listener at localhost.localdomain/45823] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 16:57:51,233 INFO [Listener at localhost.localdomain/45823] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 16:57:51,344 WARN [BP-1651038660-148.251.75.209-1686243408203 heartbeating to localhost.localdomain/127.0.0.1:38111] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-08 16:57:51,344 WARN [BP-1651038660-148.251.75.209-1686243408203 heartbeating to localhost.localdomain/127.0.0.1:38111] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1651038660-148.251.75.209-1686243408203 (Datanode Uuid 9e54f55c-5189-4ba8-9577-6564ad93a8b9) service to localhost.localdomain/127.0.0.1:38111 2023-06-08 16:57:51,346 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046/cluster_354a9fa2-4437-cd37-4d0d-722cf68a5a06/dfs/data/data1/current/BP-1651038660-148.251.75.209-1686243408203] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:57:51,346 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046/cluster_354a9fa2-4437-cd37-4d0d-722cf68a5a06/dfs/data/data2/current/BP-1651038660-148.251.75.209-1686243408203] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:57:51,362 INFO [Listener at localhost.localdomain/45823] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-06-08 16:57:51,481 INFO 
[Listener at localhost.localdomain/45823] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-06-08 16:57:51,500 INFO [Listener at localhost.localdomain/45823] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-06-08 16:57:51,510 INFO [Listener at localhost.localdomain/45823] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=94 (was 88) - Thread LEAK? -, OpenFileDescriptor=498 (was 460) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=30 (was 25) - SystemLoadAverage LEAK? -, ProcessCount=184 (was 189), AvailableMemoryMB=1915 (was 1946) 2023-06-08 16:57:51,520 INFO [Listener at localhost.localdomain/45823] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRolling Thread=95, OpenFileDescriptor=498, MaxFileDescriptor=60000, SystemLoadAverage=30, ProcessCount=184, AvailableMemoryMB=1915 2023-06-08 16:57:51,520 INFO [Listener at localhost.localdomain/45823] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-06-08 16:57:51,520 INFO [Listener at localhost.localdomain/45823] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046/hadoop.log.dir so I do NOT create it in target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82 2023-06-08 16:57:51,520 INFO [Listener at localhost.localdomain/45823] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1345e0f8-ca2d-324a-1f2c-680ec10e0046/hadoop.tmp.dir so I do NOT create it in target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82 2023-06-08 16:57:51,520 INFO [Listener at localhost.localdomain/45823] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82/cluster_bdc0776d-3fb6-7fd9-07f3-397efd8b688a, deleteOnExit=true 2023-06-08 16:57:51,520 INFO [Listener at localhost.localdomain/45823] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-06-08 16:57:51,521 INFO [Listener at localhost.localdomain/45823] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82/test.cache.data in system properties and HBase conf 2023-06-08 16:57:51,521 INFO [Listener at localhost.localdomain/45823] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82/hadoop.tmp.dir in system properties and HBase conf 2023-06-08 16:57:51,521 INFO [Listener at localhost.localdomain/45823] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82/hadoop.log.dir in system properties and HBase conf 2023-06-08 16:57:51,521 INFO [Listener at localhost.localdomain/45823] 
hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82/mapreduce.cluster.local.dir in system properties and HBase conf 2023-06-08 16:57:51,521 INFO [Listener at localhost.localdomain/45823] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-06-08 16:57:51,521 INFO [Listener at localhost.localdomain/45823] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-06-08 16:57:51,521 DEBUG [Listener at localhost.localdomain/45823] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-06-08 16:57:51,521 INFO [Listener at localhost.localdomain/45823] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-06-08 16:57:51,521 INFO [Listener at localhost.localdomain/45823] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-06-08 16:57:51,522 INFO [Listener at localhost.localdomain/45823] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-06-08 16:57:51,522 INFO [Listener at localhost.localdomain/45823] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-08 16:57:51,522 INFO [Listener at localhost.localdomain/45823] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-06-08 16:57:51,522 INFO [Listener at localhost.localdomain/45823] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-08 16:57:51,522 INFO [Listener at localhost.localdomain/45823] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-08 16:57:51,522 INFO [Listener at localhost.localdomain/45823] hbase.HBaseTestingUtility(772): 
Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-08 16:57:51,522 INFO [Listener at localhost.localdomain/45823] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-08 16:57:51,522 INFO [Listener at localhost.localdomain/45823] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82/nfs.dump.dir in system properties and HBase conf 2023-06-08 16:57:51,522 INFO [Listener at localhost.localdomain/45823] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82/java.io.tmpdir in system properties and HBase conf 2023-06-08 16:57:51,522 INFO [Listener at localhost.localdomain/45823] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-08 16:57:51,522 INFO [Listener at localhost.localdomain/45823] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-08 16:57:51,523 INFO [Listener at localhost.localdomain/45823] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-08 16:57:51,524 WARN [Listener at localhost.localdomain/45823] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-06-08 16:57:51,525 WARN [Listener at localhost.localdomain/45823] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-08 16:57:51,525 WARN [Listener at localhost.localdomain/45823] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-08 16:57:51,547 WARN [Listener at localhost.localdomain/45823] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 16:57:51,549 INFO [Listener at localhost.localdomain/45823] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 16:57:51,554 INFO [Listener at localhost.localdomain/45823] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82/java.io.tmpdir/Jetty_localhost_localdomain_33813_hdfs____av3zdg/webapp 2023-06-08 16:57:51,627 INFO [Listener at localhost.localdomain/45823] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:33813 2023-06-08 16:57:51,628 WARN [Listener at localhost.localdomain/45823] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-06-08 16:57:51,669 WARN [Listener at localhost.localdomain/45823] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-08 16:57:51,669 WARN [Listener at localhost.localdomain/45823] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-08 16:57:51,694 WARN [Listener at localhost.localdomain/34495] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 16:57:51,702 WARN [Listener at localhost.localdomain/34495] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 16:57:51,705 WARN [Listener at localhost.localdomain/34495] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 16:57:51,707 INFO [Listener at localhost.localdomain/34495] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 16:57:51,714 INFO [Listener at localhost.localdomain/34495] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82/java.io.tmpdir/Jetty_localhost_38859_datanode____.xu60qq/webapp 2023-06-08 16:57:51,788 INFO [Listener at localhost.localdomain/34495] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38859 2023-06-08 16:57:51,796 WARN [Listener at localhost.localdomain/34289] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 16:57:51,808 WARN [Listener at localhost.localdomain/34289] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 16:57:51,811 WARN [Listener at localhost.localdomain/34289] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 16:57:51,812 INFO [Listener at localhost.localdomain/34289] 
log.Slf4jLog(67): jetty-6.1.26 2023-06-08 16:57:51,817 INFO [Listener at localhost.localdomain/34289] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82/java.io.tmpdir/Jetty_localhost_36807_datanode____yk3opv/webapp 2023-06-08 16:57:51,871 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x21802c0f7d1bdef5: Processing first storage report for DS-097edd3d-5c71-4ff7-8e5e-9ec9445a560b from datanode 932545c7-073e-4d40-a6b5-66b0b0fdeba2 2023-06-08 16:57:51,871 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x21802c0f7d1bdef5: from storage DS-097edd3d-5c71-4ff7-8e5e-9ec9445a560b node DatanodeRegistration(127.0.0.1:45579, datanodeUuid=932545c7-073e-4d40-a6b5-66b0b0fdeba2, infoPort=32773, infoSecurePort=0, ipcPort=34289, storageInfo=lv=-57;cid=testClusterID;nsid=1720256390;c=1686243471526), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 16:57:51,871 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x21802c0f7d1bdef5: Processing first storage report for DS-b085d967-0cc7-4499-bab6-2247595bc82a from datanode 932545c7-073e-4d40-a6b5-66b0b0fdeba2 2023-06-08 16:57:51,871 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x21802c0f7d1bdef5: from storage DS-b085d967-0cc7-4499-bab6-2247595bc82a node DatanodeRegistration(127.0.0.1:45579, datanodeUuid=932545c7-073e-4d40-a6b5-66b0b0fdeba2, infoPort=32773, infoSecurePort=0, ipcPort=34289, storageInfo=lv=-57;cid=testClusterID;nsid=1720256390;c=1686243471526), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 16:57:51,900 INFO [Listener at localhost.localdomain/34289] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36807 2023-06-08 16:57:51,906 WARN [Listener at localhost.localdomain/33511] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 16:57:51,958 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5841b6e42d6dbe2c: Processing first storage report for DS-c20adba3-ee3d-4404-8208-9bec70188b85 from datanode 229c7354-8dfc-4d51-9ea2-5939476ed88a 2023-06-08 16:57:51,958 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5841b6e42d6dbe2c: from storage DS-c20adba3-ee3d-4404-8208-9bec70188b85 node DatanodeRegistration(127.0.0.1:37463, datanodeUuid=229c7354-8dfc-4d51-9ea2-5939476ed88a, infoPort=35531, infoSecurePort=0, ipcPort=33511, storageInfo=lv=-57;cid=testClusterID;nsid=1720256390;c=1686243471526), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 16:57:51,958 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5841b6e42d6dbe2c: Processing first storage report for DS-3add22b1-6db6-43f5-8169-cbfb4bad68ae from datanode 229c7354-8dfc-4d51-9ea2-5939476ed88a 2023-06-08 16:57:51,958 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5841b6e42d6dbe2c: from storage DS-3add22b1-6db6-43f5-8169-cbfb4bad68ae node DatanodeRegistration(127.0.0.1:37463, 
datanodeUuid=229c7354-8dfc-4d51-9ea2-5939476ed88a, infoPort=35531, infoSecurePort=0, ipcPort=33511, storageInfo=lv=-57;cid=testClusterID;nsid=1720256390;c=1686243471526), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-06-08 16:57:52,016 DEBUG [Listener at localhost.localdomain/33511] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82 2023-06-08 16:57:52,020 INFO [Listener at localhost.localdomain/33511] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82/cluster_bdc0776d-3fb6-7fd9-07f3-397efd8b688a/zookeeper_0, clientPort=64082, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82/cluster_bdc0776d-3fb6-7fd9-07f3-397efd8b688a/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82/cluster_bdc0776d-3fb6-7fd9-07f3-397efd8b688a/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-06-08 16:57:52,022 INFO [Listener at localhost.localdomain/33511] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=64082 2023-06-08 16:57:52,022 INFO [Listener at localhost.localdomain/33511] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:57:52,023 INFO [Listener at localhost.localdomain/33511] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:57:52,036 INFO [Listener at localhost.localdomain/33511] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa with version=8 2023-06-08 16:57:52,036 INFO [Listener at localhost.localdomain/33511] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/hbase-staging 2023-06-08 16:57:52,038 INFO [Listener at localhost.localdomain/33511] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-06-08 16:57:52,038 INFO [Listener at localhost.localdomain/33511] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 16:57:52,038 INFO [Listener at localhost.localdomain/33511] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-08 16:57:52,038 INFO [Listener at localhost.localdomain/33511] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-08 16:57:52,038 INFO [Listener at localhost.localdomain/33511] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class 
java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 16:57:52,038 INFO [Listener at localhost.localdomain/33511] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-08 16:57:52,038 INFO [Listener at localhost.localdomain/33511] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-06-08 16:57:52,039 INFO [Listener at localhost.localdomain/33511] ipc.NettyRpcServer(120): Bind to /148.251.75.209:46075 2023-06-08 16:57:52,040 INFO [Listener at localhost.localdomain/33511] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:57:52,040 INFO [Listener at localhost.localdomain/33511] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:57:52,041 INFO [Listener at localhost.localdomain/33511] zookeeper.RecoverableZooKeeper(93): Process identifier=master:46075 connecting to ZooKeeper ensemble=127.0.0.1:64082 2023-06-08 16:57:52,047 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): master:460750x0, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-08 16:57:52,048 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:46075-0x101cba78abe0000 connected 2023-06-08 16:57:52,062 DEBUG [Listener at localhost.localdomain/33511] zookeeper.ZKUtil(164): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-08 16:57:52,063 DEBUG [Listener at localhost.localdomain/33511] zookeeper.ZKUtil(164): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 16:57:52,063 DEBUG [Listener at localhost.localdomain/33511] zookeeper.ZKUtil(164): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-08 16:57:52,064 DEBUG [Listener at localhost.localdomain/33511] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46075 2023-06-08 16:57:52,064 DEBUG [Listener at localhost.localdomain/33511] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46075 2023-06-08 16:57:52,064 DEBUG [Listener at localhost.localdomain/33511] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46075 2023-06-08 16:57:52,065 DEBUG [Listener at localhost.localdomain/33511] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46075 2023-06-08 16:57:52,065 DEBUG [Listener at localhost.localdomain/33511] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46075 2023-06-08 16:57:52,065 INFO [Listener at localhost.localdomain/33511] master.HMaster(444): 
hbase.rootdir=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa, hbase.cluster.distributed=false 2023-06-08 16:57:52,080 INFO [Listener at localhost.localdomain/33511] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-06-08 16:57:52,080 INFO [Listener at localhost.localdomain/33511] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 16:57:52,080 INFO [Listener at localhost.localdomain/33511] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-08 16:57:52,080 INFO [Listener at localhost.localdomain/33511] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-08 16:57:52,080 INFO [Listener at localhost.localdomain/33511] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 16:57:52,080 INFO [Listener at localhost.localdomain/33511] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-08 16:57:52,080 INFO [Listener at localhost.localdomain/33511] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-08 16:57:52,082 INFO [Listener at localhost.localdomain/33511] ipc.NettyRpcServer(120): Bind to /148.251.75.209:45795 2023-06-08 16:57:52,082 INFO [Listener at localhost.localdomain/33511] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-08 16:57:52,083 DEBUG [Listener at localhost.localdomain/33511] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-08 16:57:52,083 INFO [Listener at localhost.localdomain/33511] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:57:52,084 INFO [Listener at localhost.localdomain/33511] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:57:52,085 INFO [Listener at localhost.localdomain/33511] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45795 connecting to ZooKeeper ensemble=127.0.0.1:64082 2023-06-08 16:57:52,087 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): regionserver:457950x0, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-08 16:57:52,088 DEBUG [Listener at localhost.localdomain/33511] zookeeper.ZKUtil(164): regionserver:457950x0, quorum=127.0.0.1:64082, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-08 16:57:52,089 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45795-0x101cba78abe0001 connected 2023-06-08 16:57:52,089 DEBUG [Listener at localhost.localdomain/33511] zookeeper.ZKUtil(164): regionserver:45795-0x101cba78abe0001, 
quorum=127.0.0.1:64082, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 16:57:52,089 DEBUG [Listener at localhost.localdomain/33511] zookeeper.ZKUtil(164): regionserver:45795-0x101cba78abe0001, quorum=127.0.0.1:64082, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-08 16:57:52,090 DEBUG [Listener at localhost.localdomain/33511] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45795 2023-06-08 16:57:52,090 DEBUG [Listener at localhost.localdomain/33511] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45795 2023-06-08 16:57:52,090 DEBUG [Listener at localhost.localdomain/33511] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45795 2023-06-08 16:57:52,090 DEBUG [Listener at localhost.localdomain/33511] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45795 2023-06-08 16:57:52,091 DEBUG [Listener at localhost.localdomain/33511] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45795 2023-06-08 16:57:52,091 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,46075,1686243472037 2023-06-08 16:57:52,093 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-08 16:57:52,093 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,46075,1686243472037 2023-06-08 16:57:52,094 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-08 16:57:52,094 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): regionserver:45795-0x101cba78abe0001, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-08 16:57:52,094 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:57:52,095 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-08 16:57:52,096 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,46075,1686243472037 from backup master directory 2023-06-08 16:57:52,096 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-08 16:57:52,097 DEBUG [Listener at localhost.localdomain/33511-EventThread] 
zookeeper.ZKWatcher(600): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,46075,1686243472037 2023-06-08 16:57:52,097 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-08 16:57:52,097 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-06-08 16:57:52,097 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,46075,1686243472037 2023-06-08 16:57:52,110 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/hbase.id with ID: 6b320ad2-d6ad-478a-9410-57dd0d2ed55a 2023-06-08 16:57:52,119 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:57:52,121 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:57:52,128 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x7eeb3008 to 127.0.0.1:64082 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-08 16:57:52,133 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4f32e617, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-08 16:57:52,133 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-08 16:57:52,133 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-06-08 16:57:52,134 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 16:57:52,135 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', 
MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/MasterData/data/master/store-tmp 2023-06-08 16:57:52,144 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:57:52,144 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-08 16:57:52,144 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:57:52,144 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:57:52,144 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-08 16:57:52,144 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:57:52,144 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:57:52,144 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-08 16:57:52,145 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/MasterData/WALs/jenkins-hbase20.apache.org,46075,1686243472037 2023-06-08 16:57:52,147 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C46075%2C1686243472037, suffix=, logDir=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/MasterData/WALs/jenkins-hbase20.apache.org,46075,1686243472037, archiveDir=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/MasterData/oldWALs, maxLogs=10 2023-06-08 16:57:52,154 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/MasterData/WALs/jenkins-hbase20.apache.org,46075,1686243472037/jenkins-hbase20.apache.org%2C46075%2C1686243472037.1686243472148 2023-06-08 16:57:52,154 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45579,DS-097edd3d-5c71-4ff7-8e5e-9ec9445a560b,DISK], DatanodeInfoWithStorage[127.0.0.1:37463,DS-c20adba3-ee3d-4404-8208-9bec70188b85,DISK]] 2023-06-08 16:57:52,154 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-08 16:57:52,154 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated 
master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:57:52,155 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:57:52,155 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:57:52,156 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:57:52,158 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-08 16:57:52,158 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-08 16:57:52,158 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:57:52,159 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:57:52,159 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:57:52,162 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:57:52,165 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 16:57:52,165 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; 
SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=762617, jitterRate=-0.030283108353614807}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-08 16:57:52,165 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-08 16:57:52,165 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-08 16:57:52,166 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-08 16:57:52,166 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-06-08 16:57:52,166 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-06-08 16:57:52,167 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-06-08 16:57:52,167 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-06-08 16:57:52,167 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-08 16:57:52,168 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-08 16:57:52,169 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-06-08 16:57:52,180 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-08 16:57:52,180 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-06-08 16:57:52,181 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-08 16:57:52,181 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-08 16:57:52,182 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-08 16:57:52,183 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:57:52,183 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-08 16:57:52,184 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-08 16:57:52,184 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-08 16:57:52,185 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-08 16:57:52,185 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): regionserver:45795-0x101cba78abe0001, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-08 16:57:52,185 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:57:52,185 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,46075,1686243472037, sessionid=0x101cba78abe0000, setting cluster-up flag (Was=false) 2023-06-08 16:57:52,188 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:57:52,190 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-08 16:57:52,191 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,46075,1686243472037 2023-06-08 16:57:52,193 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:57:52,196 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-08 16:57:52,197 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,46075,1686243472037 2023-06-08 16:57:52,197 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/.hbase-snapshot/.tmp 2023-06-08 16:57:52,200 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-08 16:57:52,201 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-08 16:57:52,201 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-08 16:57:52,201 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-08 16:57:52,201 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-08 16:57:52,201 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-06-08 16:57:52,201 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:57:52,201 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-08 16:57:52,201 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:57:52,203 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1686243502203 2023-06-08 16:57:52,203 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-08 16:57:52,203 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-08 16:57:52,203 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-08 16:57:52,203 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): 
Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-08 16:57:52,203 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-08 16:57:52,203 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-08 16:57:52,204 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-08 16:57:52,205 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-08 16:57:52,205 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-08 16:57:52,205 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-08 16:57:52,205 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-08 16:57:52,205 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-08 16:57:52,206 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-08 16:57:52,206 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-08 16:57:52,206 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686243472206,5,FailOnTimeoutGroup] 2023-06-08 16:57:52,206 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686243472206,5,FailOnTimeoutGroup] 2023-06-08 16:57:52,206 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-08 16:57:52,206 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-08 16:57:52,206 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-08 16:57:52,206 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-06-08 16:57:52,206 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-08 16:57:52,217 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-08 16:57:52,217 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-08 16:57:52,218 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa 2023-06-08 16:57:52,224 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:57:52,225 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-08 16:57:52,226 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/meta/1588230740/info 2023-06-08 16:57:52,226 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-08 16:57:52,227 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:57:52,227 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-08 16:57:52,228 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/meta/1588230740/rep_barrier 2023-06-08 16:57:52,228 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-08 16:57:52,229 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:57:52,229 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-08 16:57:52,230 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/meta/1588230740/table 2023-06-08 16:57:52,230 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle 
point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-08 16:57:52,231 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:57:52,232 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/meta/1588230740 2023-06-08 16:57:52,232 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/meta/1588230740 2023-06-08 16:57:52,234 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-08 16:57:52,235 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-08 16:57:52,237 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 16:57:52,238 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=867030, jitterRate=0.10248617827892303}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-08 16:57:52,238 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-08 16:57:52,238 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-08 16:57:52,238 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-08 16:57:52,238 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-08 16:57:52,238 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-08 16:57:52,238 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-08 16:57:52,238 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-08 16:57:52,238 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-08 16:57:52,239 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-08 16:57:52,239 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-08 16:57:52,240 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure 
table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-08 16:57:52,241 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-08 16:57:52,242 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-08 16:57:52,293 INFO [RS:0;jenkins-hbase20:45795] regionserver.HRegionServer(951): ClusterId : 6b320ad2-d6ad-478a-9410-57dd0d2ed55a 2023-06-08 16:57:52,294 DEBUG [RS:0;jenkins-hbase20:45795] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-08 16:57:52,298 DEBUG [RS:0;jenkins-hbase20:45795] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-08 16:57:52,298 DEBUG [RS:0;jenkins-hbase20:45795] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-08 16:57:52,301 DEBUG [RS:0;jenkins-hbase20:45795] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-08 16:57:52,303 DEBUG [RS:0;jenkins-hbase20:45795] zookeeper.ReadOnlyZKClient(139): Connect 0x599f4c08 to 127.0.0.1:64082 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-08 16:57:52,317 DEBUG [RS:0;jenkins-hbase20:45795] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5758b94b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-08 16:57:52,318 DEBUG [RS:0;jenkins-hbase20:45795] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@53672778, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-08 16:57:52,327 DEBUG [RS:0;jenkins-hbase20:45795] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:45795 2023-06-08 16:57:52,327 INFO [RS:0;jenkins-hbase20:45795] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-08 16:57:52,327 INFO [RS:0;jenkins-hbase20:45795] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-08 16:57:52,327 DEBUG [RS:0;jenkins-hbase20:45795] regionserver.HRegionServer(1022): About to register with Master. 
2023-06-08 16:57:52,328 INFO [RS:0;jenkins-hbase20:45795] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,46075,1686243472037 with isa=jenkins-hbase20.apache.org/148.251.75.209:45795, startcode=1686243472079 2023-06-08 16:57:52,328 DEBUG [RS:0;jenkins-hbase20:45795] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-08 16:57:52,331 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:40393, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-06-08 16:57:52,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46075] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,45795,1686243472079 2023-06-08 16:57:52,332 DEBUG [RS:0;jenkins-hbase20:45795] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa 2023-06-08 16:57:52,332 DEBUG [RS:0;jenkins-hbase20:45795] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:34495 2023-06-08 16:57:52,332 DEBUG [RS:0;jenkins-hbase20:45795] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-08 16:57:52,334 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 16:57:52,334 DEBUG [RS:0;jenkins-hbase20:45795] zookeeper.ZKUtil(162): regionserver:45795-0x101cba78abe0001, quorum=127.0.0.1:64082, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,45795,1686243472079 2023-06-08 16:57:52,334 WARN [RS:0;jenkins-hbase20:45795] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-06-08 16:57:52,334 INFO [RS:0;jenkins-hbase20:45795] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 16:57:52,334 DEBUG [RS:0;jenkins-hbase20:45795] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/WALs/jenkins-hbase20.apache.org,45795,1686243472079 2023-06-08 16:57:52,335 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,45795,1686243472079] 2023-06-08 16:57:52,338 DEBUG [RS:0;jenkins-hbase20:45795] zookeeper.ZKUtil(162): regionserver:45795-0x101cba78abe0001, quorum=127.0.0.1:64082, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,45795,1686243472079 2023-06-08 16:57:52,339 DEBUG [RS:0;jenkins-hbase20:45795] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-08 16:57:52,339 INFO [RS:0;jenkins-hbase20:45795] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-08 16:57:52,340 INFO [RS:0;jenkins-hbase20:45795] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-08 16:57:52,340 INFO [RS:0;jenkins-hbase20:45795] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-08 16:57:52,340 INFO [RS:0;jenkins-hbase20:45795] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-08 16:57:52,340 INFO [RS:0;jenkins-hbase20:45795] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-08 16:57:52,342 INFO [RS:0;jenkins-hbase20:45795] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-06-08 16:57:52,342 DEBUG [RS:0;jenkins-hbase20:45795] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:57:52,343 DEBUG [RS:0;jenkins-hbase20:45795] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:57:52,343 DEBUG [RS:0;jenkins-hbase20:45795] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:57:52,343 DEBUG [RS:0;jenkins-hbase20:45795] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:57:52,343 DEBUG [RS:0;jenkins-hbase20:45795] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:57:52,343 DEBUG [RS:0;jenkins-hbase20:45795] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-08 16:57:52,343 DEBUG [RS:0;jenkins-hbase20:45795] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:57:52,343 DEBUG [RS:0;jenkins-hbase20:45795] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:57:52,343 DEBUG [RS:0;jenkins-hbase20:45795] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:57:52,343 DEBUG [RS:0;jenkins-hbase20:45795] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:57:52,344 INFO [RS:0;jenkins-hbase20:45795] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-08 16:57:52,344 INFO [RS:0;jenkins-hbase20:45795] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-08 16:57:52,344 INFO [RS:0;jenkins-hbase20:45795] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-08 16:57:52,354 INFO [RS:0;jenkins-hbase20:45795] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-08 16:57:52,354 INFO [RS:0;jenkins-hbase20:45795] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,45795,1686243472079-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-06-08 16:57:52,364 INFO [RS:0;jenkins-hbase20:45795] regionserver.Replication(203): jenkins-hbase20.apache.org,45795,1686243472079 started 2023-06-08 16:57:52,364 INFO [RS:0;jenkins-hbase20:45795] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,45795,1686243472079, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:45795, sessionid=0x101cba78abe0001 2023-06-08 16:57:52,364 DEBUG [RS:0;jenkins-hbase20:45795] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-08 16:57:52,364 DEBUG [RS:0;jenkins-hbase20:45795] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,45795,1686243472079 2023-06-08 16:57:52,364 DEBUG [RS:0;jenkins-hbase20:45795] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,45795,1686243472079' 2023-06-08 16:57:52,364 DEBUG [RS:0;jenkins-hbase20:45795] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-08 16:57:52,364 DEBUG [RS:0;jenkins-hbase20:45795] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-08 16:57:52,365 DEBUG [RS:0;jenkins-hbase20:45795] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-08 16:57:52,365 DEBUG [RS:0;jenkins-hbase20:45795] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-08 16:57:52,365 DEBUG [RS:0;jenkins-hbase20:45795] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,45795,1686243472079 2023-06-08 16:57:52,365 DEBUG [RS:0;jenkins-hbase20:45795] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,45795,1686243472079' 2023-06-08 16:57:52,365 DEBUG [RS:0;jenkins-hbase20:45795] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-06-08 16:57:52,365 DEBUG [RS:0;jenkins-hbase20:45795] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-08 16:57:52,365 DEBUG [RS:0;jenkins-hbase20:45795] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-08 16:57:52,365 INFO [RS:0;jenkins-hbase20:45795] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-08 16:57:52,365 INFO [RS:0;jenkins-hbase20:45795] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-06-08 16:57:52,392 DEBUG [jenkins-hbase20:46075] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-08 16:57:52,393 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,45795,1686243472079, state=OPENING 2023-06-08 16:57:52,394 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-08 16:57:52,396 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:57:52,396 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,45795,1686243472079}] 2023-06-08 16:57:52,396 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-08 16:57:52,469 INFO [RS:0;jenkins-hbase20:45795] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C45795%2C1686243472079, suffix=, logDir=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/WALs/jenkins-hbase20.apache.org,45795,1686243472079, archiveDir=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/oldWALs, maxLogs=32 2023-06-08 16:57:52,481 INFO [RS:0;jenkins-hbase20:45795] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/WALs/jenkins-hbase20.apache.org,45795,1686243472079/jenkins-hbase20.apache.org%2C45795%2C1686243472079.1686243472469 2023-06-08 16:57:52,481 DEBUG [RS:0;jenkins-hbase20:45795] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45579,DS-097edd3d-5c71-4ff7-8e5e-9ec9445a560b,DISK], DatanodeInfoWithStorage[127.0.0.1:37463,DS-c20adba3-ee3d-4404-8208-9bec70188b85,DISK]] 2023-06-08 16:57:52,554 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,45795,1686243472079 2023-06-08 16:57:52,554 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-08 16:57:52,562 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:40156, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-08 16:57:52,567 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-08 16:57:52,568 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 16:57:52,571 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C45795%2C1686243472079.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/WALs/jenkins-hbase20.apache.org,45795,1686243472079, archiveDir=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/oldWALs, maxLogs=32 2023-06-08 16:57:52,582 
INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/WALs/jenkins-hbase20.apache.org,45795,1686243472079/jenkins-hbase20.apache.org%2C45795%2C1686243472079.meta.1686243472571.meta 2023-06-08 16:57:52,582 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37463,DS-c20adba3-ee3d-4404-8208-9bec70188b85,DISK], DatanodeInfoWithStorage[127.0.0.1:45579,DS-097edd3d-5c71-4ff7-8e5e-9ec9445a560b,DISK]] 2023-06-08 16:57:52,582 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-08 16:57:52,582 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-08 16:57:52,583 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-08 16:57:52,583 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-06-08 16:57:52,583 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-08 16:57:52,583 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:57:52,583 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-08 16:57:52,583 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-08 16:57:52,585 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-08 16:57:52,586 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/meta/1588230740/info 2023-06-08 16:57:52,586 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/meta/1588230740/info 2023-06-08 16:57:52,586 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-08 16:57:52,587 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:57:52,587 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-08 16:57:52,588 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/meta/1588230740/rep_barrier 2023-06-08 16:57:52,588 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/meta/1588230740/rep_barrier 2023-06-08 16:57:52,588 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-08 16:57:52,588 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:57:52,588 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-08 16:57:52,589 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/meta/1588230740/table 2023-06-08 16:57:52,589 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/meta/1588230740/table 2023-06-08 16:57:52,590 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-08 16:57:52,590 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:57:52,591 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/meta/1588230740 2023-06-08 16:57:52,592 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/meta/1588230740 2023-06-08 16:57:52,594 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-08 16:57:52,595 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-08 16:57:52,596 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=883347, jitterRate=0.12323495745658875}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-08 16:57:52,596 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-08 16:57:52,598 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1686243472554 2023-06-08 16:57:52,602 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-08 16:57:52,603 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-08 16:57:52,603 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,45795,1686243472079, state=OPEN 2023-06-08 16:57:52,605 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-08 16:57:52,605 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-08 16:57:52,607 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-08 16:57:52,608 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,45795,1686243472079 in 209 msec 2023-06-08 
16:57:52,609 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-08 16:57:52,610 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 369 msec 2023-06-08 16:57:52,612 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 411 msec 2023-06-08 16:57:52,612 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1686243472612, completionTime=-1 2023-06-08 16:57:52,612 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-08 16:57:52,612 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-06-08 16:57:52,618 DEBUG [hconnection-0xf5ec640-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-08 16:57:52,620 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:40158, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-08 16:57:52,622 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-08 16:57:52,622 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1686243532622 2023-06-08 16:57:52,622 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1686243592622 2023-06-08 16:57:52,622 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 10 msec 2023-06-08 16:57:52,628 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,46075,1686243472037-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-08 16:57:52,628 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,46075,1686243472037-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-08 16:57:52,628 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,46075,1686243472037-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-08 16:57:52,628 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:46075, period=300000, unit=MILLISECONDS is enabled. 2023-06-08 16:57:52,628 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-08 16:57:52,628 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-06-08 16:57:52,628 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-08 16:57:52,629 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-08 16:57:52,630 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-08 16:57:52,631 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-08 16:57:52,632 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-08 16:57:52,634 DEBUG [HFileArchiver-9] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/.tmp/data/hbase/namespace/f60bec9f2c5268f9e3d3e619301ef857 2023-06-08 16:57:52,634 DEBUG [HFileArchiver-9] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/.tmp/data/hbase/namespace/f60bec9f2c5268f9e3d3e619301ef857 empty. 2023-06-08 16:57:52,635 DEBUG [HFileArchiver-9] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/.tmp/data/hbase/namespace/f60bec9f2c5268f9e3d3e619301ef857 2023-06-08 16:57:52,635 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-08 16:57:52,645 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-08 16:57:52,646 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => f60bec9f2c5268f9e3d3e619301ef857, NAME => 'hbase:namespace,,1686243472628.f60bec9f2c5268f9e3d3e619301ef857.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/.tmp 2023-06-08 16:57:52,654 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686243472628.f60bec9f2c5268f9e3d3e619301ef857.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:57:52,654 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing f60bec9f2c5268f9e3d3e619301ef857, disabling compactions & flushes 2023-06-08 16:57:52,654 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region 
hbase:namespace,,1686243472628.f60bec9f2c5268f9e3d3e619301ef857. 2023-06-08 16:57:52,654 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686243472628.f60bec9f2c5268f9e3d3e619301ef857. 2023-06-08 16:57:52,654 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686243472628.f60bec9f2c5268f9e3d3e619301ef857. after waiting 0 ms 2023-06-08 16:57:52,654 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686243472628.f60bec9f2c5268f9e3d3e619301ef857. 2023-06-08 16:57:52,654 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686243472628.f60bec9f2c5268f9e3d3e619301ef857. 2023-06-08 16:57:52,654 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for f60bec9f2c5268f9e3d3e619301ef857: 2023-06-08 16:57:52,656 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-08 16:57:52,657 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1686243472628.f60bec9f2c5268f9e3d3e619301ef857.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686243472657"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686243472657"}]},"ts":"1686243472657"} 2023-06-08 16:57:52,659 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-08 16:57:52,660 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-08 16:57:52,660 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686243472660"}]},"ts":"1686243472660"} 2023-06-08 16:57:52,661 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-08 16:57:52,665 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=f60bec9f2c5268f9e3d3e619301ef857, ASSIGN}] 2023-06-08 16:57:52,667 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=f60bec9f2c5268f9e3d3e619301ef857, ASSIGN 2023-06-08 16:57:52,668 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=f60bec9f2c5268f9e3d3e619301ef857, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,45795,1686243472079; forceNewPlan=false, retain=false 2023-06-08 16:57:52,820 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=f60bec9f2c5268f9e3d3e619301ef857, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,45795,1686243472079 2023-06-08 16:57:52,820 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1686243472628.f60bec9f2c5268f9e3d3e619301ef857.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686243472820"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686243472820"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686243472820"}]},"ts":"1686243472820"} 2023-06-08 16:57:52,824 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure f60bec9f2c5268f9e3d3e619301ef857, server=jenkins-hbase20.apache.org,45795,1686243472079}] 2023-06-08 16:57:52,986 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1686243472628.f60bec9f2c5268f9e3d3e619301ef857. 2023-06-08 16:57:52,986 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f60bec9f2c5268f9e3d3e619301ef857, NAME => 'hbase:namespace,,1686243472628.f60bec9f2c5268f9e3d3e619301ef857.', STARTKEY => '', ENDKEY => ''} 2023-06-08 16:57:52,987 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace f60bec9f2c5268f9e3d3e619301ef857 2023-06-08 16:57:52,987 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686243472628.f60bec9f2c5268f9e3d3e619301ef857.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:57:52,987 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for f60bec9f2c5268f9e3d3e619301ef857 2023-06-08 16:57:52,987 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for f60bec9f2c5268f9e3d3e619301ef857 2023-06-08 16:57:52,990 INFO [StoreOpener-f60bec9f2c5268f9e3d3e619301ef857-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region f60bec9f2c5268f9e3d3e619301ef857 2023-06-08 16:57:52,992 DEBUG [StoreOpener-f60bec9f2c5268f9e3d3e619301ef857-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/namespace/f60bec9f2c5268f9e3d3e619301ef857/info 2023-06-08 16:57:52,992 DEBUG [StoreOpener-f60bec9f2c5268f9e3d3e619301ef857-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/namespace/f60bec9f2c5268f9e3d3e619301ef857/info 2023-06-08 16:57:52,992 INFO [StoreOpener-f60bec9f2c5268f9e3d3e619301ef857-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f60bec9f2c5268f9e3d3e619301ef857 columnFamilyName info 2023-06-08 16:57:52,993 INFO [StoreOpener-f60bec9f2c5268f9e3d3e619301ef857-1] regionserver.HStore(310): Store=f60bec9f2c5268f9e3d3e619301ef857/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:57:52,994 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/namespace/f60bec9f2c5268f9e3d3e619301ef857 2023-06-08 16:57:52,994 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/namespace/f60bec9f2c5268f9e3d3e619301ef857 2023-06-08 16:57:52,997 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for f60bec9f2c5268f9e3d3e619301ef857 2023-06-08 16:57:53,000 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/namespace/f60bec9f2c5268f9e3d3e619301ef857/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 16:57:53,000 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened f60bec9f2c5268f9e3d3e619301ef857; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=740313, jitterRate=-0.058643996715545654}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-08 16:57:53,000 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for f60bec9f2c5268f9e3d3e619301ef857: 2023-06-08 16:57:53,002 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1686243472628.f60bec9f2c5268f9e3d3e619301ef857., pid=6, masterSystemTime=1686243472978 2023-06-08 16:57:53,005 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1686243472628.f60bec9f2c5268f9e3d3e619301ef857. 2023-06-08 16:57:53,005 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1686243472628.f60bec9f2c5268f9e3d3e619301ef857. 
2023-06-08 16:57:53,006 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=f60bec9f2c5268f9e3d3e619301ef857, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,45795,1686243472079 2023-06-08 16:57:53,006 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1686243472628.f60bec9f2c5268f9e3d3e619301ef857.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686243473006"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686243473006"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686243473006"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686243473006"}]},"ts":"1686243473006"} 2023-06-08 16:57:53,010 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-08 16:57:53,011 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure f60bec9f2c5268f9e3d3e619301ef857, server=jenkins-hbase20.apache.org,45795,1686243472079 in 184 msec 2023-06-08 16:57:53,013 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-08 16:57:53,013 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=f60bec9f2c5268f9e3d3e619301ef857, ASSIGN in 345 msec 2023-06-08 16:57:53,014 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-08 16:57:53,014 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686243473014"}]},"ts":"1686243473014"} 2023-06-08 16:57:53,015 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-08 16:57:53,017 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-08 16:57:53,019 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 389 msec 2023-06-08 16:57:53,031 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-08 16:57:53,039 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-08 16:57:53,039 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:57:53,044 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-08 16:57:53,051 DEBUG [Listener at localhost.localdomain/33511-EventThread] 
zookeeper.ZKWatcher(600): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-08 16:57:53,054 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 10 msec 2023-06-08 16:57:53,066 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-08 16:57:53,074 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-08 16:57:53,077 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 11 msec 2023-06-08 16:57:53,090 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-08 16:57:53,091 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-08 16:57:53,091 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.994sec 2023-06-08 16:57:53,091 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-08 16:57:53,091 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-06-08 16:57:53,091 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-08 16:57:53,091 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,46075,1686243472037-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-08 16:57:53,091 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,46075,1686243472037-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-06-08 16:57:53,093 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-08 16:57:53,093 DEBUG [Listener at localhost.localdomain/33511] zookeeper.ReadOnlyZKClient(139): Connect 0x1191a066 to 127.0.0.1:64082 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-08 16:57:53,099 DEBUG [Listener at localhost.localdomain/33511] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6f7ef667, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-08 16:57:53,100 DEBUG [hconnection-0x3df1f624-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-08 16:57:53,102 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:40166, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-08 16:57:53,103 INFO [Listener at localhost.localdomain/33511] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,46075,1686243472037 2023-06-08 16:57:53,103 INFO [Listener at localhost.localdomain/33511] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:57:53,108 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-08 16:57:53,108 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:57:53,109 INFO [Listener at localhost.localdomain/33511] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-08 16:57:53,111 DEBUG [Listener at localhost.localdomain/33511] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-06-08 16:57:53,115 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:44076, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-06-08 16:57:53,117 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46075] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-06-08 16:57:53,117 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46075] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
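
The two TableDescriptorChecker warnings above come from deliberately tiny thresholds (an 8192-byte memstore flush size and a 786432-byte maximum region file size), which is what drives the frequent flushes, compactions and the split request later in this log. How the test sets them is not visible in this excerpt; the sketch below only illustrates how such values would typically be forced through an HBase Configuration, using the property names quoted verbatim in the warnings (the class name is made up).

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    // Illustrative only: force the tiny flush/split thresholds reported by
    // TableDescriptorChecker above. The values are copied from the log; the way
    // the test actually applies them is not shown in this excerpt.
    public class TinyRegionThresholds {
      public static Configuration create() {
        Configuration conf = HBaseConfiguration.create();
        conf.setLong("hbase.hregion.memstore.flush.size", 8192L);  // ~8 KB flushes
        conf.setLong("hbase.hregion.max.filesize", 786432L);       // ~768 KB max region size
        return conf;
      }
    }
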
2023-06-08 16:57:53,117 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46075] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-08 16:57:53,121 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46075] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRolling 2023-06-08 16:57:53,123 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-06-08 16:57:53,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46075] master.MasterRpcServices(697): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRolling" procId is: 9 2023-06-08 16:57:53,124 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-08 16:57:53,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46075] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-08 16:57:53,126 DEBUG [HFileArchiver-10] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/.tmp/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973 2023-06-08 16:57:53,127 DEBUG [HFileArchiver-10] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/.tmp/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973 empty. 
2023-06-08 16:57:53,127 DEBUG [HFileArchiver-10] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/.tmp/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973 2023-06-08 16:57:53,127 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRolling regions 2023-06-08 16:57:53,138 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/.tmp/data/default/TestLogRolling-testLogRolling/.tabledesc/.tableinfo.0000000001 2023-06-08 16:57:53,140 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => 3d4cf2b7551519f755e0a5ca6c209973, NAME => 'TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/.tmp 2023-06-08 16:57:53,150 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:57:53,150 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1604): Closing 3d4cf2b7551519f755e0a5ca6c209973, disabling compactions & flushes 2023-06-08 16:57:53,150 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973. 2023-06-08 16:57:53,150 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973. 2023-06-08 16:57:53,150 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973. after waiting 0 ms 2023-06-08 16:57:53,150 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973. 2023-06-08 16:57:53,150 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973. 
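
The CreateTableProcedure above was triggered by the client request logged at 16:57:53,117 carrying the 'info' column-family descriptor (BLOOMFILTER=ROW, VERSIONS=1, BLOCKSIZE=65536). As a rough illustration only, not the test's actual code, an equivalent table could be created through the HBase 2.x Admin API roughly as follows; the connection setup and class name are assumptions.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    // Illustrative sketch of the create-table request logged above: one family
    // 'info' with BLOOMFILTER=ROW, VERSIONS=1, BLOCKSIZE=65536.
    public class CreateTestLogRollingTable {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          admin.createTable(TableDescriptorBuilder
              .newBuilder(TableName.valueOf("TestLogRolling-testLogRolling"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder
                  .newBuilder(Bytes.toBytes("info"))
                  .setBloomFilterType(BloomType.ROW)
                  .setMaxVersions(1)
                  .setBlocksize(65536)
                  .build())
              .build());
        }
      }
    }
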
2023-06-08 16:57:53,150 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for 3d4cf2b7551519f755e0a5ca6c209973: 2023-06-08 16:57:53,153 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ADD_TO_META 2023-06-08 16:57:53,154 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1686243473154"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686243473154"}]},"ts":"1686243473154"} 2023-06-08 16:57:53,155 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-08 16:57:53,156 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-08 16:57:53,156 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686243473156"}]},"ts":"1686243473156"} 2023-06-08 16:57:53,158 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLING in hbase:meta 2023-06-08 16:57:53,160 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=3d4cf2b7551519f755e0a5ca6c209973, ASSIGN}] 2023-06-08 16:57:53,162 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=3d4cf2b7551519f755e0a5ca6c209973, ASSIGN 2023-06-08 16:57:53,163 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=3d4cf2b7551519f755e0a5ca6c209973, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,45795,1686243472079; forceNewPlan=false, retain=false 2023-06-08 16:57:53,314 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=3d4cf2b7551519f755e0a5ca6c209973, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,45795,1686243472079 2023-06-08 16:57:53,315 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1686243473314"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686243473314"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686243473314"}]},"ts":"1686243473314"} 2023-06-08 16:57:53,318 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 3d4cf2b7551519f755e0a5ca6c209973, server=jenkins-hbase20.apache.org,45795,1686243472079}] 2023-06-08 16:57:53,478 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open 
TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973. 2023-06-08 16:57:53,478 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3d4cf2b7551519f755e0a5ca6c209973, NAME => 'TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973.', STARTKEY => '', ENDKEY => ''} 2023-06-08 16:57:53,479 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling 3d4cf2b7551519f755e0a5ca6c209973 2023-06-08 16:57:53,479 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:57:53,479 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 3d4cf2b7551519f755e0a5ca6c209973 2023-06-08 16:57:53,479 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 3d4cf2b7551519f755e0a5ca6c209973 2023-06-08 16:57:53,481 INFO [StoreOpener-3d4cf2b7551519f755e0a5ca6c209973-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 3d4cf2b7551519f755e0a5ca6c209973 2023-06-08 16:57:53,483 DEBUG [StoreOpener-3d4cf2b7551519f755e0a5ca6c209973-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info 2023-06-08 16:57:53,483 DEBUG [StoreOpener-3d4cf2b7551519f755e0a5ca6c209973-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info 2023-06-08 16:57:53,483 INFO [StoreOpener-3d4cf2b7551519f755e0a5ca6c209973-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3d4cf2b7551519f755e0a5ca6c209973 columnFamilyName info 2023-06-08 16:57:53,484 INFO [StoreOpener-3d4cf2b7551519f755e0a5ca6c209973-1] regionserver.HStore(310): Store=3d4cf2b7551519f755e0a5ca6c209973/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:57:53,485 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973 2023-06-08 16:57:53,486 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973 2023-06-08 16:57:53,490 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 3d4cf2b7551519f755e0a5ca6c209973 2023-06-08 16:57:53,491 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 16:57:53,492 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 3d4cf2b7551519f755e0a5ca6c209973; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=875938, jitterRate=0.11381359398365021}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-08 16:57:53,492 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 3d4cf2b7551519f755e0a5ca6c209973: 2023-06-08 16:57:53,493 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973., pid=11, masterSystemTime=1686243473473 2023-06-08 16:57:53,495 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973. 2023-06-08 16:57:53,495 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973. 
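
The flushes that follow at 16:58:03 are driven by client writes into this newly opened region. The loop below is purely hypothetical: only the table name, the 'info' family and the zero-padded row-key pattern (inferred from splitKey=row0062 further down) come from the log, while the class name, qualifier and ~1 KB value size are assumptions chosen to be consistent with the 7-21 KB memstore flushes reported against the 8 KB flush size.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    // Hypothetical write load; the qualifier and value size are assumptions.
    public class WriteLoadSketch {
      public static void run(Connection conn, int rows) throws Exception {
        byte[] family = Bytes.toBytes("info");
        byte[] qualifier = Bytes.toBytes("q");     // assumed qualifier
        byte[] value = new byte[1024];             // ~1 KB per cell (assumed)
        try (Table table = conn.getTable(TableName.valueOf("TestLogRolling-testLogRolling"))) {
          for (int i = 0; i < rows; i++) {
            Put put = new Put(Bytes.toBytes(String.format("row%04d", i)));
            put.addColumn(family, qualifier, value);
            table.put(put);
          }
        }
      }
    }
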
2023-06-08 16:57:53,495 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=3d4cf2b7551519f755e0a5ca6c209973, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,45795,1686243472079 2023-06-08 16:57:53,495 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1686243473495"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686243473495"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686243473495"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686243473495"}]},"ts":"1686243473495"} 2023-06-08 16:57:53,499 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-06-08 16:57:53,499 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 3d4cf2b7551519f755e0a5ca6c209973, server=jenkins-hbase20.apache.org,45795,1686243472079 in 179 msec 2023-06-08 16:57:53,501 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-06-08 16:57:53,501 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=3d4cf2b7551519f755e0a5ca6c209973, ASSIGN in 339 msec 2023-06-08 16:57:53,502 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-08 16:57:53,502 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686243473502"}]},"ts":"1686243473502"} 2023-06-08 16:57:53,503 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLED in hbase:meta 2023-06-08 16:57:53,506 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-06-08 16:57:53,508 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRolling in 389 msec 2023-06-08 16:57:56,243 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-08 16:57:58,339 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-06-08 16:57:58,340 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-06-08 16:57:58,342 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRolling' 2023-06-08 16:58:03,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46075] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-06-08 16:58:03,128 INFO [Listener at localhost.localdomain/33511] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: 
default:TestLogRolling-testLogRolling, procId: 9 completed 2023-06-08 16:58:03,134 DEBUG [Listener at localhost.localdomain/33511] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRolling 2023-06-08 16:58:03,135 DEBUG [Listener at localhost.localdomain/33511] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973. 2023-06-08 16:58:03,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45795] regionserver.HRegion(9158): Flush requested on 3d4cf2b7551519f755e0a5ca6c209973 2023-06-08 16:58:03,153 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 3d4cf2b7551519f755e0a5ca6c209973 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-08 16:58:03,165 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/.tmp/info/061d3c3f35c4489281a41c533762cbad 2023-06-08 16:58:03,177 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/.tmp/info/061d3c3f35c4489281a41c533762cbad as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/061d3c3f35c4489281a41c533762cbad 2023-06-08 16:58:03,183 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/061d3c3f35c4489281a41c533762cbad, entries=7, sequenceid=11, filesize=12.1 K 2023-06-08 16:58:03,183 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=19.96 KB/20444 for 3d4cf2b7551519f755e0a5ca6c209973 in 30ms, sequenceid=11, compaction requested=false 2023-06-08 16:58:03,184 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 3d4cf2b7551519f755e0a5ca6c209973: 2023-06-08 16:58:03,185 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45795] regionserver.HRegion(9158): Flush requested on 3d4cf2b7551519f755e0a5ca6c209973 2023-06-08 16:58:03,185 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 3d4cf2b7551519f755e0a5ca6c209973 1/1 column families, dataSize=21.02 KB heapSize=22.75 KB 2023-06-08 16:58:03,198 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.02 KB at sequenceid=34 (bloomFilter=true), to=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/.tmp/info/9ce033e01c8a48a68f70719cf3d6db0a 2023-06-08 16:58:03,205 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/.tmp/info/9ce033e01c8a48a68f70719cf3d6db0a as 
hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/9ce033e01c8a48a68f70719cf3d6db0a 2023-06-08 16:58:03,211 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/9ce033e01c8a48a68f70719cf3d6db0a, entries=20, sequenceid=34, filesize=25.8 K 2023-06-08 16:58:03,211 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.02 KB/21520, heapSize ~22.73 KB/23280, currentSize=5.25 KB/5380 for 3d4cf2b7551519f755e0a5ca6c209973 in 26ms, sequenceid=34, compaction requested=false 2023-06-08 16:58:03,212 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 3d4cf2b7551519f755e0a5ca6c209973: 2023-06-08 16:58:03,212 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=37.9 K, sizeToCheck=16.0 K 2023-06-08 16:58:03,212 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-08 16:58:03,212 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/9ce033e01c8a48a68f70719cf3d6db0a because midkey is the same as first or last row 2023-06-08 16:58:05,201 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45795] regionserver.HRegion(9158): Flush requested on 3d4cf2b7551519f755e0a5ca6c209973 2023-06-08 16:58:05,201 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 3d4cf2b7551519f755e0a5ca6c209973 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-08 16:58:05,222 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=44 (bloomFilter=true), to=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/.tmp/info/65ea0eae34c84b17adc3f803edfb1cb5 2023-06-08 16:58:05,229 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/.tmp/info/65ea0eae34c84b17adc3f803edfb1cb5 as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/65ea0eae34c84b17adc3f803edfb1cb5 2023-06-08 16:58:05,235 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/65ea0eae34c84b17adc3f803edfb1cb5, entries=7, sequenceid=44, filesize=12.1 K 2023-06-08 16:58:05,236 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=18.91 KB/19368 for 3d4cf2b7551519f755e0a5ca6c209973 in 35ms, sequenceid=44, compaction requested=true 2023-06-08 16:58:05,236 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 3d4cf2b7551519f755e0a5ca6c209973: 
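
The repeated "Should split because info size=..., sizeToCheck=16.0 K" lines can be reproduced from values already printed when this region opened at 16:57:53: initialSize=16384 (twice the 8192-byte flush size) and a jittered desiredMaxFileSize of 875938 (786432 x (1 + 0.11381...)). The arithmetic sketch below is a reconstruction, not HBase source, assuming the usual IncreasingToUpperBoundRegionSplitPolicy rule of capping initialSize times the cube of the table's region count at the jittered maximum file size.

    // Arithmetic sketch only; mirrors the numbers printed in this log, assuming
    // sizeToCheck = min(desiredMaxFileSize, initialSize * regionCount^3).
    public class SplitSizeSketch {
      public static void main(String[] args) {
        long initialSize = 2L * 8192;       // 16384: twice the 8192-byte flush size
        long desiredMaxFileSize = 875938L;  // 786432 * (1 + 0.11381...), from the log
        int regionsWithCommonTable = 1;     // reported by the split-policy DEBUG lines
        long cube = (long) regionsWithCommonTable * regionsWithCommonTable * regionsWithCommonTable;
        long sizeToCheck = Math.min(desiredMaxFileSize, initialSize * cube);
        System.out.println(sizeToCheck);    // 16384 bytes, i.e. the "16.0 K" in the log
      }
    }
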
2023-06-08 16:58:05,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45795] regionserver.HRegion(9158): Flush requested on 3d4cf2b7551519f755e0a5ca6c209973 2023-06-08 16:58:05,236 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=50.0 K, sizeToCheck=16.0 K 2023-06-08 16:58:05,236 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-08 16:58:05,236 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/9ce033e01c8a48a68f70719cf3d6db0a because midkey is the same as first or last row 2023-06-08 16:58:05,236 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-08 16:58:05,236 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-08 16:58:05,237 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 3d4cf2b7551519f755e0a5ca6c209973 1/1 column families, dataSize=19.96 KB heapSize=21.63 KB 2023-06-08 16:58:05,238 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 51218 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-08 16:58:05,239 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HStore(1912): 3d4cf2b7551519f755e0a5ca6c209973/info is initiating minor compaction (all files) 2023-06-08 16:58:05,239 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 3d4cf2b7551519f755e0a5ca6c209973/info in TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973. 
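
The "selected 3 files of size 51218 ... with 1 in ratio" message above refers to the three flush files added earlier (12.1 K, 25.8 K and 12.1 K) under the ratio of 1.2 shown in the store's compaction configuration. A rough reconstruction of that "in ratio" test, not the policy's actual source: no single file may exceed the ratio times the combined size of the others.

    // Reconstruction of the "in ratio" check, using the rounded sizes printed in
    // this log (12.1 K + 25.8 K + 12.1 K ~= the 50.0 K / 51218-byte selection).
    public class FilesInRatioSketch {
      public static void main(String[] args) {
        double[] fileSizesKb = {12.1, 25.8, 12.1};   // from the "Compacting ... size=" lines
        double ratio = 1.2;                          // from the CompactionConfiguration line
        double total = 0;
        for (double s : fileSizesKb) total += s;
        boolean inRatio = true;
        for (double s : fileSizesKb) {
          // a selection is "in ratio" if no single file dwarfs the rest
          if (s > (total - s) * ratio) inRatio = false;
        }
        System.out.println(inRatio);                 // true: 25.8 <= 1.2 * (12.1 + 12.1) = 29.04
      }
    }
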
2023-06-08 16:58:05,239 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/061d3c3f35c4489281a41c533762cbad, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/9ce033e01c8a48a68f70719cf3d6db0a, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/65ea0eae34c84b17adc3f803edfb1cb5] into tmpdir=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/.tmp, totalSize=50.0 K 2023-06-08 16:58:05,240 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.Compactor(207): Compacting 061d3c3f35c4489281a41c533762cbad, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, earliestPutTs=1686243483141 2023-06-08 16:58:05,241 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.Compactor(207): Compacting 9ce033e01c8a48a68f70719cf3d6db0a, keycount=20, bloomtype=ROW, size=25.8 K, encoding=NONE, compression=NONE, seqNum=34, earliestPutTs=1686243483154 2023-06-08 16:58:05,241 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.Compactor(207): Compacting 65ea0eae34c84b17adc3f803edfb1cb5, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=44, earliestPutTs=1686243483186 2023-06-08 16:58:05,251 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=19.96 KB at sequenceid=66 (bloomFilter=true), to=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/.tmp/info/58d9a25e09da417083e21ed7f362281a 2023-06-08 16:58:05,262 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45795] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=3d4cf2b7551519f755e0a5ca6c209973, server=jenkins-hbase20.apache.org,45795,1686243472079
    at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
    at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-06-08 16:58:05,262 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45795] ipc.CallRunner(144): callId: 72 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:40166 deadline: 1686243495261, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=3d4cf2b7551519f755e0a5ca6c209973, server=jenkins-hbase20.apache.org,45795,1686243472079 2023-06-08 16:58:05,262 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/.tmp/info/58d9a25e09da417083e21ed7f362281a as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/58d9a25e09da417083e21ed7f362281a 2023-06-08 16:58:05,267 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] throttle.PressureAwareThroughputController(145): 3d4cf2b7551519f755e0a5ca6c209973#info#compaction#29 average throughput is 17.44 MB/second, slept 0 time(s) and total slept time is 0 ms.
0 active operations remaining, total limit is 50.00 MB/second 2023-06-08 16:58:05,268 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/58d9a25e09da417083e21ed7f362281a, entries=19, sequenceid=66, filesize=24.7 K 2023-06-08 16:58:05,270 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~19.96 KB/20444, heapSize ~21.61 KB/22128, currentSize=10.51 KB/10760 for 3d4cf2b7551519f755e0a5ca6c209973 in 34ms, sequenceid=66, compaction requested=false 2023-06-08 16:58:05,270 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 3d4cf2b7551519f755e0a5ca6c209973: 2023-06-08 16:58:05,270 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=74.8 K, sizeToCheck=16.0 K 2023-06-08 16:58:05,270 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-08 16:58:05,270 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/9ce033e01c8a48a68f70719cf3d6db0a because midkey is the same as first or last row 2023-06-08 16:58:05,284 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/.tmp/info/e5d8a4ac35e646a8a0f0ec6b65a01ae4 as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/e5d8a4ac35e646a8a0f0ec6b65a01ae4 2023-06-08 16:58:05,291 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 3d4cf2b7551519f755e0a5ca6c209973/info of 3d4cf2b7551519f755e0a5ca6c209973 into e5d8a4ac35e646a8a0f0ec6b65a01ae4(size=40.7 K), total size for store is 65.4 K. This selection was in queue for 0sec, and took 0sec to execute. 
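
The RegionTooBusyException logged at 16:58:05,262 is the region pushing back while its 32.0 K blocking memstore limit is exceeded (consistent with four times the 8 KB flush size configured for this test); the normal HBase client retries such calls on its own. Purely as an illustration, with the class name, method and backoff values being assumptions, an explicit retry around a put might look like this:

    import java.io.IOException;
    import org.apache.hadoop.hbase.RegionTooBusyException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;

    // Hypothetical helper: retry a put with a short backoff when the region
    // reports it is over its memstore limit, as in the warning above.
    public class BusyRegionRetry {
      public static void putWithBackoff(Connection conn, Put put)
          throws IOException, InterruptedException {
        try (Table table = conn.getTable(TableName.valueOf("TestLogRolling-testLogRolling"))) {
          for (int attempt = 0; ; attempt++) {
            try {
              table.put(put);
              return;
            } catch (RegionTooBusyException e) {
              if (attempt >= 3) throw e;            // give up after a few attempts
              Thread.sleep(200L * (attempt + 1));   // simple linear backoff (assumed values)
            }
          }
        }
      }
    }
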
2023-06-08 16:58:05,291 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 3d4cf2b7551519f755e0a5ca6c209973: 2023-06-08 16:58:05,291 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973., storeName=3d4cf2b7551519f755e0a5ca6c209973/info, priority=13, startTime=1686243485236; duration=0sec 2023-06-08 16:58:05,292 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=65.4 K, sizeToCheck=16.0 K 2023-06-08 16:58:05,292 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-08 16:58:05,292 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/e5d8a4ac35e646a8a0f0ec6b65a01ae4 because midkey is the same as first or last row 2023-06-08 16:58:05,292 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-08 16:58:15,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45795] regionserver.HRegion(9158): Flush requested on 3d4cf2b7551519f755e0a5ca6c209973 2023-06-08 16:58:15,370 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 3d4cf2b7551519f755e0a5ca6c209973 1/1 column families, dataSize=11.56 KB heapSize=12.63 KB 2023-06-08 16:58:15,390 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=11.56 KB at sequenceid=81 (bloomFilter=true), to=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/.tmp/info/58105d08f07646a396400c1ebff65e74 2023-06-08 16:58:15,400 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/.tmp/info/58105d08f07646a396400c1ebff65e74 as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/58105d08f07646a396400c1ebff65e74 2023-06-08 16:58:15,408 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/58105d08f07646a396400c1ebff65e74, entries=11, sequenceid=81, filesize=16.3 K 2023-06-08 16:58:15,409 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~11.56 KB/11836, heapSize ~12.61 KB/12912, currentSize=0 B/0 for 3d4cf2b7551519f755e0a5ca6c209973 in 39ms, sequenceid=81, compaction requested=true 2023-06-08 16:58:15,409 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 3d4cf2b7551519f755e0a5ca6c209973: 2023-06-08 16:58:15,409 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=81.7 K, 
sizeToCheck=16.0 K 2023-06-08 16:58:15,409 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-08 16:58:15,409 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/e5d8a4ac35e646a8a0f0ec6b65a01ae4 because midkey is the same as first or last row 2023-06-08 16:58:15,409 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-06-08 16:58:15,409 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-08 16:58:15,411 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 83687 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-08 16:58:15,411 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HStore(1912): 3d4cf2b7551519f755e0a5ca6c209973/info is initiating minor compaction (all files) 2023-06-08 16:58:15,411 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 3d4cf2b7551519f755e0a5ca6c209973/info in TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973. 2023-06-08 16:58:15,412 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/e5d8a4ac35e646a8a0f0ec6b65a01ae4, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/58d9a25e09da417083e21ed7f362281a, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/58105d08f07646a396400c1ebff65e74] into tmpdir=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/.tmp, totalSize=81.7 K 2023-06-08 16:58:15,412 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.Compactor(207): Compacting e5d8a4ac35e646a8a0f0ec6b65a01ae4, keycount=34, bloomtype=ROW, size=40.7 K, encoding=NONE, compression=NONE, seqNum=44, earliestPutTs=1686243483141 2023-06-08 16:58:15,413 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.Compactor(207): Compacting 58d9a25e09da417083e21ed7f362281a, keycount=19, bloomtype=ROW, size=24.7 K, encoding=NONE, compression=NONE, seqNum=66, earliestPutTs=1686243485203 2023-06-08 16:58:15,413 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.Compactor(207): Compacting 58105d08f07646a396400c1ebff65e74, keycount=11, bloomtype=ROW, size=16.3 K, encoding=NONE, compression=NONE, seqNum=81, earliestPutTs=1686243485237 2023-06-08 16:58:15,430 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] throttle.PressureAwareThroughputController(145): 3d4cf2b7551519f755e0a5ca6c209973#info#compaction#31 average 
throughput is 16.42 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-08 16:58:15,442 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/.tmp/info/15b6a76e4c02401db0095b33fffb879a as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/15b6a76e4c02401db0095b33fffb879a 2023-06-08 16:58:15,449 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 3d4cf2b7551519f755e0a5ca6c209973/info of 3d4cf2b7551519f755e0a5ca6c209973 into 15b6a76e4c02401db0095b33fffb879a(size=72.5 K), total size for store is 72.5 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-06-08 16:58:15,449 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 3d4cf2b7551519f755e0a5ca6c209973: 2023-06-08 16:58:15,449 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973., storeName=3d4cf2b7551519f755e0a5ca6c209973/info, priority=13, startTime=1686243495409; duration=0sec 2023-06-08 16:58:15,449 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=72.5 K, sizeToCheck=16.0 K 2023-06-08 16:58:15,449 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-06-08 16:58:15,450 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.CompactSplit(227): Splitting TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973., compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-08 16:58:15,450 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-08 16:58:15,451 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46075] assignment.AssignmentManager(1140): Split request from jenkins-hbase20.apache.org,45795,1686243472079, parent={ENCODED => 3d4cf2b7551519f755e0a5ca6c209973, NAME => 'TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973.', STARTKEY => '', ENDKEY => ''} splitKey=row0062 2023-06-08 16:58:15,457 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46075] assignment.SplitTableRegionProcedure(219): Splittable=true state=OPEN, location=jenkins-hbase20.apache.org,45795,1686243472079 2023-06-08 16:58:15,462 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46075] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=3d4cf2b7551519f755e0a5ca6c209973, daughterA=e9416fc1ff249b5555d59c33e3d746db, daughterB=66a0067d300bacd28279aeeefe7744ac 2023-06-08 16:58:15,463 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure 
table=TestLogRolling-testLogRolling, parent=3d4cf2b7551519f755e0a5ca6c209973, daughterA=e9416fc1ff249b5555d59c33e3d746db, daughterB=66a0067d300bacd28279aeeefe7744ac 2023-06-08 16:58:15,463 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=3d4cf2b7551519f755e0a5ca6c209973, daughterA=e9416fc1ff249b5555d59c33e3d746db, daughterB=66a0067d300bacd28279aeeefe7744ac 2023-06-08 16:58:15,463 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=3d4cf2b7551519f755e0a5ca6c209973, daughterA=e9416fc1ff249b5555d59c33e3d746db, daughterB=66a0067d300bacd28279aeeefe7744ac 2023-06-08 16:58:15,471 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=3d4cf2b7551519f755e0a5ca6c209973, UNASSIGN}] 2023-06-08 16:58:15,473 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=3d4cf2b7551519f755e0a5ca6c209973, UNASSIGN 2023-06-08 16:58:15,474 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=3d4cf2b7551519f755e0a5ca6c209973, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,45795,1686243472079 2023-06-08 16:58:15,474 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1686243495474"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686243495474"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686243495474"}]},"ts":"1686243495474"} 2023-06-08 16:58:15,476 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; CloseRegionProcedure 3d4cf2b7551519f755e0a5ca6c209973, server=jenkins-hbase20.apache.org,45795,1686243472079}] 2023-06-08 16:58:15,638 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 3d4cf2b7551519f755e0a5ca6c209973 2023-06-08 16:58:15,638 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 3d4cf2b7551519f755e0a5ca6c209973, disabling compactions & flushes 2023-06-08 16:58:15,638 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973. 2023-06-08 16:58:15,638 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973. 2023-06-08 16:58:15,638 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973. after waiting 0 ms 2023-06-08 16:58:15,638 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973. 
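The close sequence above (disabling compactions & flushes, waiting without time limit for the close lock, acquiring it after 0 ms, then disabling updates) is the usual drain-then-close pattern: in-flight writers finish, new writes are rejected, and only then can the region be unassigned for the split. A rough sketch of that pattern with a plain read/write lock; this illustrates the idea only and is not HRegion's actual locking code:

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RegionCloseSketch {
    private final ReentrantReadWriteLock closeLock = new ReentrantReadWriteLock();
    private volatile boolean writesEnabled = true;

    // Mutations hold the read side of the close lock while they run.
    void put(String row, String value) {
        closeLock.readLock().lock();
        try {
            if (!writesEnabled) {
                throw new IllegalStateException("region is closing");
            }
            // ... apply the edit to the memstore ...
        } finally {
            closeLock.readLock().unlock();
        }
    }

    // Close takes the write side, which waits for in-flight mutations to
    // drain, then disables further updates.
    void close() {
        long start = System.currentTimeMillis();
        closeLock.writeLock().lock();
        try {
            System.out.println("acquired close lock after "
                + (System.currentTimeMillis() - start) + " ms");
            writesEnabled = false;
        } finally {
            closeLock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        RegionCloseSketch region = new RegionCloseSketch();
        region.put("row0001", "v1");
        region.close();
    }
}
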
2023-06-08 16:58:15,648 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/061d3c3f35c4489281a41c533762cbad, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/9ce033e01c8a48a68f70719cf3d6db0a, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/e5d8a4ac35e646a8a0f0ec6b65a01ae4, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/65ea0eae34c84b17adc3f803edfb1cb5, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/58d9a25e09da417083e21ed7f362281a, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/58105d08f07646a396400c1ebff65e74] to archive 2023-06-08 16:58:15,649 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973.-1] backup.HFileArchiver(360): Archiving compacted files. 2023-06-08 16:58:15,650 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/061d3c3f35c4489281a41c533762cbad to hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/archive/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/061d3c3f35c4489281a41c533762cbad 2023-06-08 16:58:15,652 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/9ce033e01c8a48a68f70719cf3d6db0a to hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/archive/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/9ce033e01c8a48a68f70719cf3d6db0a 2023-06-08 16:58:15,653 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/e5d8a4ac35e646a8a0f0ec6b65a01ae4 to hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/archive/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/e5d8a4ac35e646a8a0f0ec6b65a01ae4 2023-06-08 16:58:15,655 DEBUG 
[StoreCloser-TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/65ea0eae34c84b17adc3f803edfb1cb5 to hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/archive/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/65ea0eae34c84b17adc3f803edfb1cb5 2023-06-08 16:58:15,656 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/58d9a25e09da417083e21ed7f362281a to hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/archive/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/58d9a25e09da417083e21ed7f362281a 2023-06-08 16:58:15,658 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/58105d08f07646a396400c1ebff65e74 to hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/archive/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/58105d08f07646a396400c1ebff65e74 2023-06-08 16:58:15,666 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/recovered.edits/85.seqid, newMaxSeqId=85, maxSeqId=1 2023-06-08 16:58:15,667 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973. 
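Each compacted store file above is archived rather than deleted outright: the file keeps its table/region/family/file layout but moves from the data directory to the parallel archive directory under the same root, exactly as the source and destination paths in the HFileArchiver lines show. Assuming only that layout, the path rewrite can be sketched as follows (a hypothetical helper, not the archiver's API):

public class ArchivePathSketch {

    // Rewrites .../<root>/data/<ns>/<table>/<region>/<cf>/<file>
    // to      .../<root>/archive/data/<ns>/<table>/<region>/<cf>/<file>,
    // mirroring the moves logged by backup.HFileArchiver above.
    static String toArchivePath(String storeFilePath) {
        int idx = storeFilePath.indexOf("/data/");
        if (idx < 0) {
            throw new IllegalArgumentException("not a store file path: " + storeFilePath);
        }
        return storeFilePath.substring(0, idx) + "/archive" + storeFilePath.substring(idx);
    }

    public static void main(String[] args) {
        String src = "hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa"
            + "/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/061d3c3f35c4489281a41c533762cbad";
        System.out.println(toArchivePath(src));
    }
}
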
2023-06-08 16:58:15,667 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 3d4cf2b7551519f755e0a5ca6c209973: 2023-06-08 16:58:15,670 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 3d4cf2b7551519f755e0a5ca6c209973 2023-06-08 16:58:15,671 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=3d4cf2b7551519f755e0a5ca6c209973, regionState=CLOSED 2023-06-08 16:58:15,671 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1686243495671"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686243495671"}]},"ts":"1686243495671"} 2023-06-08 16:58:15,676 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-06-08 16:58:15,676 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; CloseRegionProcedure 3d4cf2b7551519f755e0a5ca6c209973, server=jenkins-hbase20.apache.org,45795,1686243472079 in 197 msec 2023-06-08 16:58:15,679 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-06-08 16:58:15,679 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=3d4cf2b7551519f755e0a5ca6c209973, UNASSIGN in 205 msec 2023-06-08 16:58:15,694 INFO [PEWorker-1] assignment.SplitTableRegionProcedure(694): pid=12 splitting 1 storefiles, region=3d4cf2b7551519f755e0a5ca6c209973, threads=1 2023-06-08 16:58:15,696 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/15b6a76e4c02401db0095b33fffb879a for region: 3d4cf2b7551519f755e0a5ca6c209973 2023-06-08 16:58:15,722 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/15b6a76e4c02401db0095b33fffb879a for region: 3d4cf2b7551519f755e0a5ca6c209973 2023-06-08 16:58:15,722 DEBUG [PEWorker-1] assignment.SplitTableRegionProcedure(755): pid=12 split storefiles for region 3d4cf2b7551519f755e0a5ca6c209973 Daughter A: 1 storefiles, Daughter B: 1 storefiles. 
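pid=12 splits the parent's single store file without copying any data: each daughter gets a reference that points back at 15b6a76e4c02401db0095b33fffb879a in the parent and records which half of it (below, or at and above, the split key row0062) that daughter may see, which is why the daughter store openers later load paths ending in 15b6a76e....3d4cf2b...->...-bottom and ...-top. A simplified model of such a half reference; the class and method names are hypothetical:

public class HalfReferenceSketch {

    enum Half { BOTTOM, TOP }

    static final class HalfReference {
        final String parentFile;
        final String splitKey;
        final Half half;

        HalfReference(String parentFile, String splitKey, Half half) {
            this.parentFile = parentFile;
            this.splitKey = splitKey;
            this.half = half;
        }

        // A row is visible through this reference only if it falls in its half.
        boolean contains(String rowKey) {
            int cmp = rowKey.compareTo(splitKey);
            return half == Half.BOTTOM ? cmp < 0 : cmp >= 0;
        }
    }

    public static void main(String[] args) {
        String parent = "15b6a76e4c02401db0095b33fffb879a";
        HalfReference daughterA = new HalfReference(parent, "row0062", Half.BOTTOM);
        HalfReference daughterB = new HalfReference(parent, "row0062", Half.TOP);
        System.out.println(daughterA.contains("row0010")); // true: below the split key
        System.out.println(daughterB.contains("row0075")); // true: at or above the split key
    }
}
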
2023-06-08 16:58:15,751 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/e9416fc1ff249b5555d59c33e3d746db/recovered.edits/85.seqid, newMaxSeqId=85, maxSeqId=-1 2023-06-08 16:58:15,753 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/recovered.edits/85.seqid, newMaxSeqId=85, maxSeqId=-1 2023-06-08 16:58:15,755 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1686243495754"},{"qualifier":"splitA","vlen":70,"tag":[],"timestamp":"1686243495754"},{"qualifier":"splitB","vlen":70,"tag":[],"timestamp":"1686243495754"}]},"ts":"1686243495754"} 2023-06-08 16:58:15,755 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1686243495457.e9416fc1ff249b5555d59c33e3d746db.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1686243495754"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686243495754"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686243495754"}]},"ts":"1686243495754"} 2023-06-08 16:58:15,755 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1686243495754"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686243495754"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686243495754"}]},"ts":"1686243495754"} 2023-06-08 16:58:15,788 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=45795] regionserver.HRegion(9158): Flush requested on 1588230740 2023-06-08 16:58:15,789 DEBUG [MemStoreFlusher.0] regionserver.FlushAllLargeStoresPolicy(69): Since none of the CFs were above the size, flushing all. 
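The three meta Puts above are keyed by the region-name format that is visible in the row strings themselves: <table>,<startKey>,<regionId>.<encodedName>. (an empty start key for the parent and for daughter A, row0062 for daughter B, and the split time 1686243495457 as the daughters' new regionId). A tiny sketch that rebuilds those row keys from their parts; the parts come from the log, the helper itself is hypothetical:

public class MetaRowKeySketch {

    // Rebuilds the hbase:meta row-key format seen above:
    // <table>,<startKey>,<regionId>.<encodedName>.
    static String metaRowKey(String table, String startKey, long regionId, String encodedName) {
        return table + "," + startKey + "," + regionId + "." + encodedName + ".";
    }

    public static void main(String[] args) {
        System.out.println(metaRowKey("TestLogRolling-testLogRolling", "", 1686243495457L,
            "e9416fc1ff249b5555d59c33e3d746db"));
        System.out.println(metaRowKey("TestLogRolling-testLogRolling", "row0062", 1686243495457L,
            "66a0067d300bacd28279aeeefe7744ac"));
    }
}
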
2023-06-08 16:58:15,789 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.82 KB heapSize=8.36 KB 2023-06-08 16:58:15,798 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=e9416fc1ff249b5555d59c33e3d746db, ASSIGN}, {pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=66a0067d300bacd28279aeeefe7744ac, ASSIGN}] 2023-06-08 16:58:15,799 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.61 KB at sequenceid=17 (bloomFilter=false), to=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/meta/1588230740/.tmp/info/9bff93aeae6c43dcae86314acb330697 2023-06-08 16:58:15,799 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=66a0067d300bacd28279aeeefe7744ac, ASSIGN 2023-06-08 16:58:15,799 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=e9416fc1ff249b5555d59c33e3d746db, ASSIGN 2023-06-08 16:58:15,800 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=66a0067d300bacd28279aeeefe7744ac, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase20.apache.org,45795,1686243472079; forceNewPlan=false, retain=false 2023-06-08 16:58:15,800 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=e9416fc1ff249b5555d59c33e3d746db, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase20.apache.org,45795,1686243472079; forceNewPlan=false, retain=false 2023-06-08 16:58:15,814 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=216 B at sequenceid=17 (bloomFilter=false), to=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/meta/1588230740/.tmp/table/d7d70dd46f4f4d11b8b71d195d747978 2023-06-08 16:58:15,820 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/meta/1588230740/.tmp/info/9bff93aeae6c43dcae86314acb330697 as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/meta/1588230740/info/9bff93aeae6c43dcae86314acb330697 2023-06-08 16:58:15,825 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/meta/1588230740/info/9bff93aeae6c43dcae86314acb330697, entries=29, sequenceid=17, filesize=8.6 K 2023-06-08 16:58:15,826 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/meta/1588230740/.tmp/table/d7d70dd46f4f4d11b8b71d195d747978 as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/meta/1588230740/table/d7d70dd46f4f4d11b8b71d195d747978 2023-06-08 16:58:15,831 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/meta/1588230740/table/d7d70dd46f4f4d11b8b71d195d747978, entries=4, sequenceid=17, filesize=4.8 K 2023-06-08 16:58:15,832 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~4.82 KB/4939, heapSize ~8.08 KB/8272, currentSize=0 B/0 for 1588230740 in 43ms, sequenceid=17, compaction requested=false 2023-06-08 16:58:15,833 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-06-08 16:58:15,952 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=66a0067d300bacd28279aeeefe7744ac, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,45795,1686243472079 2023-06-08 16:58:15,952 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=e9416fc1ff249b5555d59c33e3d746db, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,45795,1686243472079 2023-06-08 16:58:15,952 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1686243495952"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686243495952"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686243495952"}]},"ts":"1686243495952"} 2023-06-08 16:58:15,952 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1686243495457.e9416fc1ff249b5555d59c33e3d746db.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1686243495952"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686243495952"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686243495952"}]},"ts":"1686243495952"} 2023-06-08 16:58:15,954 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE; OpenRegionProcedure 66a0067d300bacd28279aeeefe7744ac, server=jenkins-hbase20.apache.org,45795,1686243472079}] 2023-06-08 16:58:15,955 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=15, state=RUNNABLE; OpenRegionProcedure e9416fc1ff249b5555d59c33e3d746db, server=jenkins-hbase20.apache.org,45795,1686243472079}] 2023-06-08 16:58:16,116 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,,1686243495457.e9416fc1ff249b5555d59c33e3d746db. 
2023-06-08 16:58:16,117 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e9416fc1ff249b5555d59c33e3d746db, NAME => 'TestLogRolling-testLogRolling,,1686243495457.e9416fc1ff249b5555d59c33e3d746db.', STARTKEY => '', ENDKEY => 'row0062'} 2023-06-08 16:58:16,118 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling e9416fc1ff249b5555d59c33e3d746db 2023-06-08 16:58:16,118 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1686243495457.e9416fc1ff249b5555d59c33e3d746db.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:58:16,118 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for e9416fc1ff249b5555d59c33e3d746db 2023-06-08 16:58:16,118 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for e9416fc1ff249b5555d59c33e3d746db 2023-06-08 16:58:16,122 INFO [StoreOpener-e9416fc1ff249b5555d59c33e3d746db-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region e9416fc1ff249b5555d59c33e3d746db 2023-06-08 16:58:16,123 DEBUG [StoreOpener-e9416fc1ff249b5555d59c33e3d746db-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/e9416fc1ff249b5555d59c33e3d746db/info 2023-06-08 16:58:16,123 DEBUG [StoreOpener-e9416fc1ff249b5555d59c33e3d746db-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/e9416fc1ff249b5555d59c33e3d746db/info 2023-06-08 16:58:16,123 INFO [StoreOpener-e9416fc1ff249b5555d59c33e3d746db-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e9416fc1ff249b5555d59c33e3d746db columnFamilyName info 2023-06-08 16:58:16,138 DEBUG [StoreOpener-e9416fc1ff249b5555d59c33e3d746db-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/e9416fc1ff249b5555d59c33e3d746db/info/15b6a76e4c02401db0095b33fffb879a.3d4cf2b7551519f755e0a5ca6c209973->hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/15b6a76e4c02401db0095b33fffb879a-bottom 2023-06-08 
16:58:16,138 INFO [StoreOpener-e9416fc1ff249b5555d59c33e3d746db-1] regionserver.HStore(310): Store=e9416fc1ff249b5555d59c33e3d746db/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:58:16,139 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/e9416fc1ff249b5555d59c33e3d746db 2023-06-08 16:58:16,140 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/e9416fc1ff249b5555d59c33e3d746db 2023-06-08 16:58:16,143 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for e9416fc1ff249b5555d59c33e3d746db 2023-06-08 16:58:16,144 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened e9416fc1ff249b5555d59c33e3d746db; next sequenceid=86; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=694820, jitterRate=-0.11649183928966522}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-08 16:58:16,144 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for e9416fc1ff249b5555d59c33e3d746db: 2023-06-08 16:58:16,145 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1686243495457.e9416fc1ff249b5555d59c33e3d746db., pid=18, masterSystemTime=1686243496107 2023-06-08 16:58:16,145 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-06-08 16:58:16,146 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 1 store files, 0 compacting, 1 eligible, 16 blocking 2023-06-08 16:58:16,146 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,,1686243495457.e9416fc1ff249b5555d59c33e3d746db. 2023-06-08 16:58:16,146 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HStore(1912): e9416fc1ff249b5555d59c33e3d746db/info is initiating minor compaction (all files) 2023-06-08 16:58:16,146 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of e9416fc1ff249b5555d59c33e3d746db/info in TestLogRolling-testLogRolling,,1686243495457.e9416fc1ff249b5555d59c33e3d746db. 
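The open line above prints the daughter's effective split threshold: ConstantSizeRegionSplitPolicy{desiredMaxFileSize=694820, jitterRate=-0.11649183928966522}; the second daughter's open line further below prints 807445 with jitterRate=0.026720404624938965. Both values are reproduced exactly by adding a long-truncated jitter to a base of 786432 bytes, presumably the maximum file size this test configures. The base and the exact rounding are inferred from the printed numbers, not quoted from the HBase source:

public class JitterCheck {

    // desiredMaxFileSize = base + (long) (base * jitterRate), with the jitter
    // truncated toward zero before it is added. Both the base and this formula
    // are inferences from the log values above.
    static long withJitter(long base, double jitterRate) {
        return base + (long) (base * jitterRate);
    }

    public static void main(String[] args) {
        long base = 786_432L; // assumed configured max file size (768 KB)
        System.out.println(withJitter(base, -0.11649183928966522));  // 694820 (e9416fc1...)
        System.out.println(withJitter(base, 0.026720404624938965));  // 807445 (66a0067d...)
    }
}
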
2023-06-08 16:58:16,147 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/e9416fc1ff249b5555d59c33e3d746db/info/15b6a76e4c02401db0095b33fffb879a.3d4cf2b7551519f755e0a5ca6c209973->hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/15b6a76e4c02401db0095b33fffb879a-bottom] into tmpdir=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/e9416fc1ff249b5555d59c33e3d746db/.tmp, totalSize=72.5 K 2023-06-08 16:58:16,147 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.Compactor(207): Compacting 15b6a76e4c02401db0095b33fffb879a.3d4cf2b7551519f755e0a5ca6c209973, keycount=32, bloomtype=ROW, size=72.5 K, encoding=NONE, compression=NONE, seqNum=81, earliestPutTs=1686243483141 2023-06-08 16:58:16,147 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1686243495457.e9416fc1ff249b5555d59c33e3d746db. 2023-06-08 16:58:16,147 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1686243495457.e9416fc1ff249b5555d59c33e3d746db. 2023-06-08 16:58:16,148 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac. 2023-06-08 16:58:16,148 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 66a0067d300bacd28279aeeefe7744ac, NAME => 'TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac.', STARTKEY => 'row0062', ENDKEY => ''} 2023-06-08 16:58:16,148 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling 66a0067d300bacd28279aeeefe7744ac 2023-06-08 16:58:16,148 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:58:16,148 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 66a0067d300bacd28279aeeefe7744ac 2023-06-08 16:58:16,148 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 66a0067d300bacd28279aeeefe7744ac 2023-06-08 16:58:16,148 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=e9416fc1ff249b5555d59c33e3d746db, regionState=OPEN, openSeqNum=86, regionLocation=jenkins-hbase20.apache.org,45795,1686243472079 2023-06-08 16:58:16,148 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1686243495457.e9416fc1ff249b5555d59c33e3d746db.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1686243496148"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686243496148"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686243496148"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686243496148"}]},"ts":"1686243496148"} 2023-06-08 16:58:16,149 INFO [StoreOpener-66a0067d300bacd28279aeeefe7744ac-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 66a0067d300bacd28279aeeefe7744ac 2023-06-08 16:58:16,151 DEBUG [StoreOpener-66a0067d300bacd28279aeeefe7744ac-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info 2023-06-08 16:58:16,151 DEBUG [StoreOpener-66a0067d300bacd28279aeeefe7744ac-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info 2023-06-08 16:58:16,151 INFO [StoreOpener-66a0067d300bacd28279aeeefe7744ac-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 66a0067d300bacd28279aeeefe7744ac columnFamilyName info 2023-06-08 16:58:16,153 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=15 2023-06-08 16:58:16,153 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=15, state=SUCCESS; OpenRegionProcedure e9416fc1ff249b5555d59c33e3d746db, server=jenkins-hbase20.apache.org,45795,1686243472079 in 195 msec 2023-06-08 16:58:16,155 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=e9416fc1ff249b5555d59c33e3d746db, ASSIGN in 355 msec 2023-06-08 16:58:16,155 DEBUG [RS:0;jenkins-hbase20:45795] compactions.CompactionProgress(74): totalCompactingKVs=32 less than currentCompactedKVs=39 2023-06-08 16:58:16,157 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] throttle.PressureAwareThroughputController(145): e9416fc1ff249b5555d59c33e3d746db#info#compaction#34 average throughput is 12.52 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-06-08 16:58:16,161 DEBUG [StoreOpener-66a0067d300bacd28279aeeefe7744ac-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/15b6a76e4c02401db0095b33fffb879a.3d4cf2b7551519f755e0a5ca6c209973->hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/15b6a76e4c02401db0095b33fffb879a-top 2023-06-08 16:58:16,164 INFO [StoreOpener-66a0067d300bacd28279aeeefe7744ac-1] regionserver.HStore(310): Store=66a0067d300bacd28279aeeefe7744ac/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:58:16,165 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac 2023-06-08 16:58:16,167 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac 2023-06-08 16:58:16,172 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 66a0067d300bacd28279aeeefe7744ac 2023-06-08 16:58:16,173 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 66a0067d300bacd28279aeeefe7744ac; next sequenceid=86; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=807445, jitterRate=0.026720404624938965}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-08 16:58:16,173 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 66a0067d300bacd28279aeeefe7744ac: 2023-06-08 16:58:16,174 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac., pid=17, masterSystemTime=1686243496107 2023-06-08 16:58:16,174 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-08 16:58:16,177 DEBUG [RS:0;jenkins-hbase20:45795-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 1 store files, 0 compacting, 1 eligible, 16 blocking 2023-06-08 16:58:16,177 INFO [RS:0;jenkins-hbase20:45795-longCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac. 
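Both daughter compactions above are queued with priority -2147482648 because, as the log states, the store belongs to a recently split daughter region. That value is exactly Integer.MIN_VALUE + 1000, i.e. about as urgent as a request can be while still leaving headroom below it; reading it that way is an inference from the number, not a statement about how HBase computes it:

public class DaughterCompactionPriority {
    public static void main(String[] args) {
        // The priority printed above for the freshly split daughter regions.
        int priority = Integer.MIN_VALUE + 1000;
        System.out.println(priority); // -2147482648
    }
}
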
2023-06-08 16:58:16,177 DEBUG [RS:0;jenkins-hbase20:45795-longCompactions-0] regionserver.HStore(1912): 66a0067d300bacd28279aeeefe7744ac/info is initiating minor compaction (all files) 2023-06-08 16:58:16,178 INFO [RS:0;jenkins-hbase20:45795-longCompactions-0] regionserver.HRegion(2259): Starting compaction of 66a0067d300bacd28279aeeefe7744ac/info in TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac. 2023-06-08 16:58:16,178 INFO [RS:0;jenkins-hbase20:45795-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/15b6a76e4c02401db0095b33fffb879a.3d4cf2b7551519f755e0a5ca6c209973->hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/15b6a76e4c02401db0095b33fffb879a-top] into tmpdir=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp, totalSize=72.5 K 2023-06-08 16:58:16,178 DEBUG [RS:0;jenkins-hbase20:45795-longCompactions-0] compactions.Compactor(207): Compacting 15b6a76e4c02401db0095b33fffb879a.3d4cf2b7551519f755e0a5ca6c209973, keycount=32, bloomtype=ROW, size=72.5 K, encoding=NONE, compression=NONE, seqNum=82, earliestPutTs=1686243483141 2023-06-08 16:58:16,179 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac. 2023-06-08 16:58:16,180 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac. 
2023-06-08 16:58:16,180 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=66a0067d300bacd28279aeeefe7744ac, regionState=OPEN, openSeqNum=86, regionLocation=jenkins-hbase20.apache.org,45795,1686243472079 2023-06-08 16:58:16,180 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1686243496180"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686243496180"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686243496180"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686243496180"}]},"ts":"1686243496180"} 2023-06-08 16:58:16,181 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/e9416fc1ff249b5555d59c33e3d746db/.tmp/info/e7324b2c45814289baf8b8951af5c4cb as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/e9416fc1ff249b5555d59c33e3d746db/info/e7324b2c45814289baf8b8951af5c4cb 2023-06-08 16:58:16,185 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-06-08 16:58:16,185 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; OpenRegionProcedure 66a0067d300bacd28279aeeefe7744ac, server=jenkins-hbase20.apache.org,45795,1686243472079 in 228 msec 2023-06-08 16:58:16,186 INFO [RS:0;jenkins-hbase20:45795-longCompactions-0] throttle.PressureAwareThroughputController(145): 66a0067d300bacd28279aeeefe7744ac#info#compaction#35 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-08 16:58:16,187 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=12 2023-06-08 16:58:16,187 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=66a0067d300bacd28279aeeefe7744ac, ASSIGN in 387 msec 2023-06-08 16:58:16,189 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=3d4cf2b7551519f755e0a5ca6c209973, daughterA=e9416fc1ff249b5555d59c33e3d746db, daughterB=66a0067d300bacd28279aeeefe7744ac in 730 msec 2023-06-08 16:58:16,189 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 1 (all) file(s) in e9416fc1ff249b5555d59c33e3d746db/info of e9416fc1ff249b5555d59c33e3d746db into e7324b2c45814289baf8b8951af5c4cb(size=69.1 K), total size for store is 69.1 K. This selection was in queue for 0sec, and took 0sec to execute. 
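The PressureAwareThroughputController lines report, for each compaction, its average throughput, how many times it slept, and the 50.00 MB/second total limit. The underlying idea is plain rate limiting: pause a writer whenever its cumulative bytes get ahead of what the limit allows for the elapsed time. A generic sketch of that idea; HBase's controller additionally adjusts its limit dynamically under pressure, which is not modelled here:

public class ThroughputThrottleSketch {
    private final double limitBytesPerSec;
    private final long startNanos = System.nanoTime();
    private long bytesWritten = 0;
    private int sleeps = 0;

    ThroughputThrottleSketch(double limitBytesPerSec) {
        this.limitBytesPerSec = limitBytesPerSec;
    }

    // Called after each chunk is written: if the cumulative rate exceeds the
    // limit, sleep just long enough to bring it back under.
    void control(long bytes) throws InterruptedException {
        bytesWritten += bytes;
        double elapsedSec = (System.nanoTime() - startNanos) / 1e9;
        double minSecondsNeeded = bytesWritten / limitBytesPerSec;
        long sleepMillis = (long) ((minSecondsNeeded - elapsedSec) * 1000);
        if (sleepMillis > 0) {
            sleeps++;
            Thread.sleep(sleepMillis);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ThroughputThrottleSketch throttle = new ThroughputThrottleSketch(50 * 1024 * 1024); // 50 MB/s
        for (int i = 0; i < 10; i++) {
            throttle.control(4 * 1024); // 40 KB total needs well under 1 ms at this limit
        }
        // Small, fast compactions never build up enough debt to pause, hence
        // the "slept 0 time(s)" messages above.
        System.out.println("slept " + throttle.sleeps + " time(s)");
    }
}
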
2023-06-08 16:58:16,189 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for e9416fc1ff249b5555d59c33e3d746db: 2023-06-08 16:58:16,189 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1686243495457.e9416fc1ff249b5555d59c33e3d746db., storeName=e9416fc1ff249b5555d59c33e3d746db/info, priority=15, startTime=1686243496145; duration=0sec 2023-06-08 16:58:16,189 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-08 16:58:16,202 DEBUG [RS:0;jenkins-hbase20:45795-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/3cabca768aba4d0ca2c03066dd09d99b as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/3cabca768aba4d0ca2c03066dd09d99b 2023-06-08 16:58:16,210 INFO [RS:0;jenkins-hbase20:45795-longCompactions-0] regionserver.HStore(1652): Completed compaction of 1 (all) file(s) in 66a0067d300bacd28279aeeefe7744ac/info of 66a0067d300bacd28279aeeefe7744ac into 3cabca768aba4d0ca2c03066dd09d99b(size=8.0 K), total size for store is 8.0 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-06-08 16:58:16,210 DEBUG [RS:0;jenkins-hbase20:45795-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for 66a0067d300bacd28279aeeefe7744ac: 2023-06-08 16:58:16,210 INFO [RS:0;jenkins-hbase20:45795-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac., storeName=66a0067d300bacd28279aeeefe7744ac/info, priority=15, startTime=1686243496174; duration=0sec 2023-06-08 16:58:16,210 DEBUG [RS:0;jenkins-hbase20:45795-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-08 16:58:17,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45795] ipc.CallRunner(144): callId: 75 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:40166 deadline: 1686243507372, exception=org.apache.hadoop.hbase.NotServingRegionException: TestLogRolling-testLogRolling,,1686243473117.3d4cf2b7551519f755e0a5ca6c209973. 
is not online on jenkins-hbase20.apache.org,45795,1686243472079 2023-06-08 16:58:21,229 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-08 16:58:27,424 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45795] regionserver.HRegion(9158): Flush requested on 66a0067d300bacd28279aeeefe7744ac 2023-06-08 16:58:27,424 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 66a0067d300bacd28279aeeefe7744ac 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-08 16:58:27,439 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=96 (bloomFilter=true), to=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/e97407cf9822470784c343a447ea4031 2023-06-08 16:58:27,445 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/e97407cf9822470784c343a447ea4031 as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/e97407cf9822470784c343a447ea4031 2023-06-08 16:58:27,451 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/e97407cf9822470784c343a447ea4031, entries=7, sequenceid=96, filesize=12.1 K 2023-06-08 16:58:27,452 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=21.02 KB/21520 for 66a0067d300bacd28279aeeefe7744ac in 28ms, sequenceid=96, compaction requested=false 2023-06-08 16:58:27,452 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 66a0067d300bacd28279aeeefe7744ac: 2023-06-08 16:58:27,453 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45795] regionserver.HRegion(9158): Flush requested on 66a0067d300bacd28279aeeefe7744ac 2023-06-08 16:58:27,453 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 66a0067d300bacd28279aeeefe7744ac 1/1 column families, dataSize=23.12 KB heapSize=25 KB 2023-06-08 16:58:27,464 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=23.12 KB at sequenceid=121 (bloomFilter=true), to=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/3d5e3d25e7c946028df31e27eca56b36 2023-06-08 16:58:27,471 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/3d5e3d25e7c946028df31e27eca56b36 as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/3d5e3d25e7c946028df31e27eca56b36 2023-06-08 16:58:27,477 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added 
hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/3d5e3d25e7c946028df31e27eca56b36, entries=22, sequenceid=121, filesize=27.9 K 2023-06-08 16:58:27,478 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~23.12 KB/23672, heapSize ~24.98 KB/25584, currentSize=3.15 KB/3228 for 66a0067d300bacd28279aeeefe7744ac in 25ms, sequenceid=121, compaction requested=true 2023-06-08 16:58:27,478 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 66a0067d300bacd28279aeeefe7744ac: 2023-06-08 16:58:27,478 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-08 16:58:27,478 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-08 16:58:27,479 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 49123 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-08 16:58:27,479 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HStore(1912): 66a0067d300bacd28279aeeefe7744ac/info is initiating minor compaction (all files) 2023-06-08 16:58:27,479 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 66a0067d300bacd28279aeeefe7744ac/info in TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac. 2023-06-08 16:58:27,479 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/3cabca768aba4d0ca2c03066dd09d99b, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/e97407cf9822470784c343a447ea4031, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/3d5e3d25e7c946028df31e27eca56b36] into tmpdir=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp, totalSize=48.0 K 2023-06-08 16:58:27,480 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.Compactor(207): Compacting 3cabca768aba4d0ca2c03066dd09d99b, keycount=3, bloomtype=ROW, size=8.0 K, encoding=NONE, compression=NONE, seqNum=82, earliestPutTs=1686243485258 2023-06-08 16:58:27,480 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.Compactor(207): Compacting e97407cf9822470784c343a447ea4031, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=96, earliestPutTs=1686243507414 2023-06-08 16:58:27,481 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.Compactor(207): Compacting 3d5e3d25e7c946028df31e27eca56b36, keycount=22, bloomtype=ROW, size=27.9 K, encoding=NONE, compression=NONE, seqNum=121, earliestPutTs=1686243507424 2023-06-08 16:58:27,491 INFO 
[RS:0;jenkins-hbase20:45795-shortCompactions-0] throttle.PressureAwareThroughputController(145): 66a0067d300bacd28279aeeefe7744ac#info#compaction#38 average throughput is 32.84 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-08 16:58:27,505 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/05cc7a0fa385496688d3954bf84b4704 as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/05cc7a0fa385496688d3954bf84b4704 2023-06-08 16:58:27,511 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 66a0067d300bacd28279aeeefe7744ac/info of 66a0067d300bacd28279aeeefe7744ac into 05cc7a0fa385496688d3954bf84b4704(size=38.6 K), total size for store is 38.6 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-06-08 16:58:27,511 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 66a0067d300bacd28279aeeefe7744ac: 2023-06-08 16:58:27,511 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac., storeName=66a0067d300bacd28279aeeefe7744ac/info, priority=13, startTime=1686243507478; duration=0sec 2023-06-08 16:58:27,511 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-08 16:58:29,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45795] regionserver.HRegion(9158): Flush requested on 66a0067d300bacd28279aeeefe7744ac 2023-06-08 16:58:29,471 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 66a0067d300bacd28279aeeefe7744ac 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-08 16:58:29,483 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=132 (bloomFilter=true), to=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/2c8e9ddd66c142b0ab82f8462bd303a9 2023-06-08 16:58:29,488 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/2c8e9ddd66c142b0ab82f8462bd303a9 as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/2c8e9ddd66c142b0ab82f8462bd303a9 2023-06-08 16:58:29,495 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/2c8e9ddd66c142b0ab82f8462bd303a9, entries=7, sequenceid=132, filesize=12.1 K 2023-06-08 16:58:29,496 INFO 
[MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=19.96 KB/20444 for 66a0067d300bacd28279aeeefe7744ac in 25ms, sequenceid=132, compaction requested=false 2023-06-08 16:58:29,496 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 66a0067d300bacd28279aeeefe7744ac: 2023-06-08 16:58:29,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45795] regionserver.HRegion(9158): Flush requested on 66a0067d300bacd28279aeeefe7744ac 2023-06-08 16:58:29,497 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 66a0067d300bacd28279aeeefe7744ac 1/1 column families, dataSize=21.02 KB heapSize=22.75 KB 2023-06-08 16:58:29,508 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.02 KB at sequenceid=155 (bloomFilter=true), to=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/cd3477ae3b4048bc8e4dc5166c1a7f4b 2023-06-08 16:58:29,513 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/cd3477ae3b4048bc8e4dc5166c1a7f4b as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/cd3477ae3b4048bc8e4dc5166c1a7f4b 2023-06-08 16:58:29,518 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/cd3477ae3b4048bc8e4dc5166c1a7f4b, entries=20, sequenceid=155, filesize=25.8 K 2023-06-08 16:58:29,519 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.02 KB/21520, heapSize ~22.73 KB/23280, currentSize=8.41 KB/8608 for 66a0067d300bacd28279aeeefe7744ac in 22ms, sequenceid=155, compaction requested=true 2023-06-08 16:58:29,519 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 66a0067d300bacd28279aeeefe7744ac: 2023-06-08 16:58:29,519 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-06-08 16:58:29,519 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-08 16:58:29,521 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 78394 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-08 16:58:29,521 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HStore(1912): 66a0067d300bacd28279aeeefe7744ac/info is initiating minor compaction (all files) 2023-06-08 16:58:29,521 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 66a0067d300bacd28279aeeefe7744ac/info in TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac. 
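The flushes above all report bloomFilter=true, and the compaction inputs carry bloomtype=ROW, so the info family of TestLogRolling-testLogRolling writes a row-level Bloom filter into every HFile it produces. A minimal sketch of declaring such a family through the HBase client API follows; the table and family names are taken from the log, while the connection setup and the explicit BloomType.ROW choice are illustrative assumptions rather than the test's actual setup code.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateTableWithRowBlooms {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // "info" family with ROW Bloom filters, matching bloomtype=ROW in the compaction lines above.
          admin.createTable(
              TableDescriptorBuilder.newBuilder(TableName.valueOf("TestLogRolling-testLogRolling"))
                  .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                      .setBloomFilterType(BloomType.ROW)
                      .build())
                  .build());
        }
      }
    }

ROW is also the default Bloom filter type in recent HBase releases, so the setter mainly documents intent here.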
2023-06-08 16:58:29,521 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/05cc7a0fa385496688d3954bf84b4704, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/2c8e9ddd66c142b0ab82f8462bd303a9, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/cd3477ae3b4048bc8e4dc5166c1a7f4b] into tmpdir=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp, totalSize=76.6 K 2023-06-08 16:58:29,521 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.Compactor(207): Compacting 05cc7a0fa385496688d3954bf84b4704, keycount=32, bloomtype=ROW, size=38.6 K, encoding=NONE, compression=NONE, seqNum=121, earliestPutTs=1686243485258 2023-06-08 16:58:29,522 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.Compactor(207): Compacting 2c8e9ddd66c142b0ab82f8462bd303a9, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=132, earliestPutTs=1686243507454 2023-06-08 16:58:29,522 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.Compactor(207): Compacting cd3477ae3b4048bc8e4dc5166c1a7f4b, keycount=20, bloomtype=ROW, size=25.8 K, encoding=NONE, compression=NONE, seqNum=155, earliestPutTs=1686243509471 2023-06-08 16:58:29,536 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] throttle.PressureAwareThroughputController(145): 66a0067d300bacd28279aeeefe7744ac#info#compaction#41 average throughput is 60.54 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-08 16:58:29,552 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/0dc88df3697e4498b8baaaa47d12e65a as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/0dc88df3697e4498b8baaaa47d12e65a 2023-06-08 16:58:29,560 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 66a0067d300bacd28279aeeefe7744ac/info of 66a0067d300bacd28279aeeefe7744ac into 0dc88df3697e4498b8baaaa47d12e65a(size=67.2 K), total size for store is 67.2 K. This selection was in queue for 0sec, and took 0sec to execute. 
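Each "Exploring compaction algorithm has selected ... after considering 1 permutations with 1 in ratio" line above is ExploringCompactionPolicy accepting a candidate set whose files are mutually within the configured compaction ratio (hbase.hstore.compaction.ratio, 1.2 by default). The standalone sketch below illustrates the usual form of that ratio test, as an assumption about the policy's behaviour rather than a copy of the HBase source: a selection qualifies only if no single file exceeds the ratio times the combined size of the other candidates.

    import java.util.List;

    /** Illustrative ratio test in the spirit of ExploringCompactionPolicy; not the HBase source. */
    public class CompactionRatioSketch {

      /** True if no file is larger than ratio * (sum of the other candidates' sizes). */
      static boolean filesInRatio(List<Long> sizes, double ratio) {
        long total = sizes.stream().mapToLong(Long::longValue).sum();
        for (long size : sizes) {
          if (size > ratio * (total - size)) {
            return false;
          }
        }
        return true;
      }

      public static void main(String[] args) {
        System.out.println(filesInRatio(List.of(10_000L, 12_000L, 15_000L), 1.2)); // true: similar sizes
        System.out.println(filesInRatio(List.of(1_000L, 2_000L, 90_000L), 1.2));   // false: one file dominates
      }
    }

The "with 1 in ratio" counts in the log indicate that the single permutation considered passed the policy's check, which is why every batch of three freshly flushed files is compacted away almost immediately.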
2023-06-08 16:58:29,560 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 66a0067d300bacd28279aeeefe7744ac: 2023-06-08 16:58:29,561 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac., storeName=66a0067d300bacd28279aeeefe7744ac/info, priority=13, startTime=1686243509519; duration=0sec 2023-06-08 16:58:29,561 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-08 16:58:31,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45795] regionserver.HRegion(9158): Flush requested on 66a0067d300bacd28279aeeefe7744ac 2023-06-08 16:58:31,513 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 66a0067d300bacd28279aeeefe7744ac 1/1 column families, dataSize=9.46 KB heapSize=10.38 KB 2023-06-08 16:58:31,527 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=9.46 KB at sequenceid=168 (bloomFilter=true), to=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/3681a0770a1643c0a9a2ff2fd962439f 2023-06-08 16:58:31,535 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/3681a0770a1643c0a9a2ff2fd962439f as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/3681a0770a1643c0a9a2ff2fd962439f 2023-06-08 16:58:31,542 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/3681a0770a1643c0a9a2ff2fd962439f, entries=9, sequenceid=168, filesize=14.2 K 2023-06-08 16:58:31,543 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~9.46 KB/9684, heapSize ~10.36 KB/10608, currentSize=17.86 KB/18292 for 66a0067d300bacd28279aeeefe7744ac in 31ms, sequenceid=168, compaction requested=false 2023-06-08 16:58:31,543 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 66a0067d300bacd28279aeeefe7744ac: 2023-06-08 16:58:31,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45795] regionserver.HRegion(9158): Flush requested on 66a0067d300bacd28279aeeefe7744ac 2023-06-08 16:58:31,544 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 66a0067d300bacd28279aeeefe7744ac 1/1 column families, dataSize=18.91 KB heapSize=20.50 KB 2023-06-08 16:58:31,559 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45795] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=66a0067d300bacd28279aeeefe7744ac, server=jenkins-hbase20.apache.org,45795,1686243472079 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-06-08 16:58:31,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45795] ipc.CallRunner(144): callId: 173 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:40166 deadline: 1686243521559, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=66a0067d300bacd28279aeeefe7744ac, server=jenkins-hbase20.apache.org,45795,1686243472079 2023-06-08 16:58:31,564 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=18.91 KB at sequenceid=189 (bloomFilter=true), to=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/fbaf355f446244b4994877f937bbda8b 2023-06-08 16:58:31,569 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/fbaf355f446244b4994877f937bbda8b as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/fbaf355f446244b4994877f937bbda8b 2023-06-08 16:58:31,574 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/fbaf355f446244b4994877f937bbda8b, entries=18, sequenceid=189, filesize=23.7 K 2023-06-08 16:58:31,575 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~18.91 KB/19368, heapSize ~20.48 KB/20976, currentSize=11.56 KB/11836 for 66a0067d300bacd28279aeeefe7744ac in 31ms, sequenceid=189, compaction requested=true 2023-06-08 16:58:31,575 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 66a0067d300bacd28279aeeefe7744ac: 2023-06-08 16:58:31,575 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-06-08 16:58:31,575 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-08 16:58:31,577 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of 
size 107670 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-08 16:58:31,577 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HStore(1912): 66a0067d300bacd28279aeeefe7744ac/info is initiating minor compaction (all files) 2023-06-08 16:58:31,577 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 66a0067d300bacd28279aeeefe7744ac/info in TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac. 2023-06-08 16:58:31,577 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/0dc88df3697e4498b8baaaa47d12e65a, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/3681a0770a1643c0a9a2ff2fd962439f, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/fbaf355f446244b4994877f937bbda8b] into tmpdir=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp, totalSize=105.1 K 2023-06-08 16:58:31,577 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.Compactor(207): Compacting 0dc88df3697e4498b8baaaa47d12e65a, keycount=59, bloomtype=ROW, size=67.2 K, encoding=NONE, compression=NONE, seqNum=155, earliestPutTs=1686243485258 2023-06-08 16:58:31,578 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.Compactor(207): Compacting 3681a0770a1643c0a9a2ff2fd962439f, keycount=9, bloomtype=ROW, size=14.2 K, encoding=NONE, compression=NONE, seqNum=168, earliestPutTs=1686243509497 2023-06-08 16:58:31,578 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.Compactor(207): Compacting fbaf355f446244b4994877f937bbda8b, keycount=18, bloomtype=ROW, size=23.7 K, encoding=NONE, compression=NONE, seqNum=189, earliestPutTs=1686243511515 2023-06-08 16:58:31,587 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] throttle.PressureAwareThroughputController(145): 66a0067d300bacd28279aeeefe7744ac#info#compaction#44 average throughput is 88.25 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-08 16:58:31,596 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/80df77c07fab433bbeb6117c940cf686 as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/80df77c07fab433bbeb6117c940cf686 2023-06-08 16:58:31,602 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 66a0067d300bacd28279aeeefe7744ac/info of 66a0067d300bacd28279aeeefe7744ac into 80df77c07fab433bbeb6117c940cf686(size=95.8 K), total size for store is 95.8 K. This selection was in queue for 0sec, and took 0sec to execute. 
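The RegionTooBusyException above is the region server applying write back-pressure: once the region's memstore exceeds its blocking limit (reported here as 32.0 K), new mutations are rejected until flushes catch up. The hypothetical client-side sketch below shows where that exception originates for a put against the table named in the log; in practice the HBase client retries it internally, so the explicit catch and the backoff values are purely illustrative.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.RegionTooBusyException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PutWithBusyBackoff {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("TestLogRolling-testLogRolling"))) {
          Put put = new Put(Bytes.toBytes("row0062"))
              .addColumn(Bytes.toBytes("info"), Bytes.toBytes("q"), Bytes.toBytes("value"));
          for (int attempt = 1; attempt <= 5; attempt++) {
            try {
              table.put(put);
              break; // write accepted
            } catch (RegionTooBusyException busy) {
              // Region is over its memstore blocking limit; give flushes time to catch up.
              Thread.sleep(200L * attempt);
            }
          }
        }
      }
    }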
2023-06-08 16:58:31,602 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 66a0067d300bacd28279aeeefe7744ac: 2023-06-08 16:58:31,602 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac., storeName=66a0067d300bacd28279aeeefe7744ac/info, priority=13, startTime=1686243511575; duration=0sec 2023-06-08 16:58:31,602 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-08 16:58:37,790 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): index stats (chunk size=209715): current pool size=0, created chunk count=0, reused chunk count=0, reuseRatio=0 2023-06-08 16:58:37,790 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): data stats (chunk size=2097152): current pool size=2, created chunk count=13, reused chunk count=34, reuseRatio=72.34% 2023-06-08 16:58:41,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45795] regionserver.HRegion(9158): Flush requested on 66a0067d300bacd28279aeeefe7744ac 2023-06-08 16:58:41,579 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 66a0067d300bacd28279aeeefe7744ac 1/1 column families, dataSize=12.61 KB heapSize=13.75 KB 2023-06-08 16:58:41,589 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=12.61 KB at sequenceid=205 (bloomFilter=true), to=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/613ef7f7b1414253b6795c0fadc70ea9 2023-06-08 16:58:41,596 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/613ef7f7b1414253b6795c0fadc70ea9 as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/613ef7f7b1414253b6795c0fadc70ea9 2023-06-08 16:58:41,601 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/613ef7f7b1414253b6795c0fadc70ea9, entries=12, sequenceid=205, filesize=17.4 K 2023-06-08 16:58:41,602 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~12.61 KB/12912, heapSize ~13.73 KB/14064, currentSize=1.05 KB/1076 for 66a0067d300bacd28279aeeefe7744ac in 23ms, sequenceid=205, compaction requested=false 2023-06-08 16:58:41,602 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 66a0067d300bacd28279aeeefe7744ac: 2023-06-08 16:58:43,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45795] regionserver.HRegion(9158): Flush requested on 66a0067d300bacd28279aeeefe7744ac 2023-06-08 16:58:43,593 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 
66a0067d300bacd28279aeeefe7744ac 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-08 16:58:43,609 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=215 (bloomFilter=true), to=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/db6f70144ede4175919603d73c5c1f8a 2023-06-08 16:58:43,615 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/db6f70144ede4175919603d73c5c1f8a as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/db6f70144ede4175919603d73c5c1f8a 2023-06-08 16:58:43,620 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/db6f70144ede4175919603d73c5c1f8a, entries=7, sequenceid=215, filesize=12.1 K 2023-06-08 16:58:43,620 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=23.12 KB/23672 for 66a0067d300bacd28279aeeefe7744ac in 27ms, sequenceid=215, compaction requested=true 2023-06-08 16:58:43,621 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 66a0067d300bacd28279aeeefe7744ac: 2023-06-08 16:58:43,621 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-08 16:58:43,621 DEBUG [RS:0;jenkins-hbase20:45795-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-08 16:58:43,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45795] regionserver.HRegion(9158): Flush requested on 66a0067d300bacd28279aeeefe7744ac 2023-06-08 16:58:43,621 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 66a0067d300bacd28279aeeefe7744ac 1/1 column families, dataSize=24.17 KB heapSize=26.13 KB 2023-06-08 16:58:43,622 DEBUG [RS:0;jenkins-hbase20:45795-longCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 128311 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-08 16:58:43,622 DEBUG [RS:0;jenkins-hbase20:45795-longCompactions-0] regionserver.HStore(1912): 66a0067d300bacd28279aeeefe7744ac/info is initiating minor compaction (all files) 2023-06-08 16:58:43,622 INFO [RS:0;jenkins-hbase20:45795-longCompactions-0] regionserver.HRegion(2259): Starting compaction of 66a0067d300bacd28279aeeefe7744ac/info in TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac. 
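Every "Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking" line means the store has just reached the minimum file count for a minor compaction while remaining well under the blocking store-file limit (16 matches the usual hbase.hstore.blockingStoreFiles default). A hedged sketch of the two knobs follows; the property names are standard HBase settings, and the values simply restate the common defaults rather than anything this test configures explicitly.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class StoreFileThresholdsSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Consider a minor compaction once at least this many eligible store files exist.
        conf.setInt("hbase.hstore.compaction.min", 3);
        // Delay further flushes (back-pressuring writers) once a store holds this many files.
        conf.setInt("hbase.hstore.blockingStoreFiles", 16);
        System.out.println("compaction.min=" + conf.getInt("hbase.hstore.compaction.min", -1)
            + ", blockingStoreFiles=" + conf.getInt("hbase.hstore.blockingStoreFiles", -1));
      }
    }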
2023-06-08 16:58:43,622 INFO [RS:0;jenkins-hbase20:45795-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/80df77c07fab433bbeb6117c940cf686, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/613ef7f7b1414253b6795c0fadc70ea9, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/db6f70144ede4175919603d73c5c1f8a] into tmpdir=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp, totalSize=125.3 K 2023-06-08 16:58:43,622 DEBUG [RS:0;jenkins-hbase20:45795-longCompactions-0] compactions.Compactor(207): Compacting 80df77c07fab433bbeb6117c940cf686, keycount=86, bloomtype=ROW, size=95.8 K, encoding=NONE, compression=NONE, seqNum=189, earliestPutTs=1686243485258 2023-06-08 16:58:43,623 DEBUG [RS:0;jenkins-hbase20:45795-longCompactions-0] compactions.Compactor(207): Compacting 613ef7f7b1414253b6795c0fadc70ea9, keycount=12, bloomtype=ROW, size=17.4 K, encoding=NONE, compression=NONE, seqNum=205, earliestPutTs=1686243511544 2023-06-08 16:58:43,623 DEBUG [RS:0;jenkins-hbase20:45795-longCompactions-0] compactions.Compactor(207): Compacting db6f70144ede4175919603d73c5c1f8a, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=215, earliestPutTs=1686243521580 2023-06-08 16:58:43,634 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.17 KB at sequenceid=241 (bloomFilter=true), to=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/22371f579b114090b6d0018c2b9c6953 2023-06-08 16:58:43,638 INFO [RS:0;jenkins-hbase20:45795-longCompactions-0] throttle.PressureAwareThroughputController(145): 66a0067d300bacd28279aeeefe7744ac#info#compaction#48 average throughput is 107.75 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-06-08 16:58:43,640 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/22371f579b114090b6d0018c2b9c6953 as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/22371f579b114090b6d0018c2b9c6953 2023-06-08 16:58:43,648 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/22371f579b114090b6d0018c2b9c6953, entries=23, sequenceid=241, filesize=29.0 K 2023-06-08 16:58:43,649 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~24.17 KB/24748, heapSize ~26.11 KB/26736, currentSize=3.15 KB/3228 for 66a0067d300bacd28279aeeefe7744ac in 28ms, sequenceid=241, compaction requested=false 2023-06-08 16:58:43,649 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 66a0067d300bacd28279aeeefe7744ac: 2023-06-08 16:58:43,653 DEBUG [RS:0;jenkins-hbase20:45795-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/b0b963a298ef4d2ea60c95bb93ac9845 as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/b0b963a298ef4d2ea60c95bb93ac9845 2023-06-08 16:58:43,659 INFO [RS:0;jenkins-hbase20:45795-longCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 66a0067d300bacd28279aeeefe7744ac/info of 66a0067d300bacd28279aeeefe7744ac into b0b963a298ef4d2ea60c95bb93ac9845(size=115.9 K), total size for store is 144.9 K. This selection was in queue for 0sec, and took 0sec to execute. 
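The "average throughput is ... total limit is 50.00 MB/second" lines come from PressureAwareThroughputController, which paces compaction I/O between a lower and a higher bound depending on how backed up the stores are; with no pressure it sits at the lower bound, which is what the 50 MB/s limit here looks like. The sketch below shows the bounds being set programmatically; the two property names are the ones generally documented for this controller, but treat them and the byte values as assumptions rather than this test's configuration.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionThroughputSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Floor used when there is no compaction pressure (matches the 50.00 MB/second limit logged above).
        conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
        // Ceiling approached as store files pile up and pressure rises.
        conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
        System.out.println("lower=" + conf.getLong("hbase.hstore.compaction.throughput.lower.bound", -1)
            + " B/s, higher=" + conf.getLong("hbase.hstore.compaction.throughput.higher.bound", -1) + " B/s");
      }
    }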
2023-06-08 16:58:43,659 DEBUG [RS:0;jenkins-hbase20:45795-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for 66a0067d300bacd28279aeeefe7744ac: 2023-06-08 16:58:43,659 INFO [RS:0;jenkins-hbase20:45795-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac., storeName=66a0067d300bacd28279aeeefe7744ac/info, priority=13, startTime=1686243523621; duration=0sec 2023-06-08 16:58:43,659 DEBUG [RS:0;jenkins-hbase20:45795-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-08 16:58:44,779 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-06-08 16:58:45,632 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45795] regionserver.HRegion(9158): Flush requested on 66a0067d300bacd28279aeeefe7744ac 2023-06-08 16:58:45,632 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 66a0067d300bacd28279aeeefe7744ac 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-06-08 16:58:45,642 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=252 (bloomFilter=true), to=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/97d490015dce4eb6bde5eb84b1e18a4a 2023-06-08 16:58:45,649 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/97d490015dce4eb6bde5eb84b1e18a4a as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/97d490015dce4eb6bde5eb84b1e18a4a 2023-06-08 16:58:45,654 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/97d490015dce4eb6bde5eb84b1e18a4a, entries=7, sequenceid=252, filesize=12.1 K 2023-06-08 16:58:45,655 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=18.91 KB/19368 for 66a0067d300bacd28279aeeefe7744ac in 23ms, sequenceid=252, compaction requested=true 2023-06-08 16:58:45,655 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 66a0067d300bacd28279aeeefe7744ac: 2023-06-08 16:58:45,655 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-06-08 16:58:45,655 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-08 16:58:45,656 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45795] regionserver.HRegion(9158): Flush requested on 66a0067d300bacd28279aeeefe7744ac 2023-06-08 16:58:45,656 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 
66a0067d300bacd28279aeeefe7744ac 1/1 column families, dataSize=19.96 KB heapSize=21.63 KB 2023-06-08 16:58:45,656 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 160777 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-08 16:58:45,656 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HStore(1912): 66a0067d300bacd28279aeeefe7744ac/info is initiating minor compaction (all files) 2023-06-08 16:58:45,656 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 66a0067d300bacd28279aeeefe7744ac/info in TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac. 2023-06-08 16:58:45,657 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/b0b963a298ef4d2ea60c95bb93ac9845, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/22371f579b114090b6d0018c2b9c6953, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/97d490015dce4eb6bde5eb84b1e18a4a] into tmpdir=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp, totalSize=157.0 K 2023-06-08 16:58:45,657 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.Compactor(207): Compacting b0b963a298ef4d2ea60c95bb93ac9845, keycount=105, bloomtype=ROW, size=115.9 K, encoding=NONE, compression=NONE, seqNum=215, earliestPutTs=1686243485258 2023-06-08 16:58:45,657 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.Compactor(207): Compacting 22371f579b114090b6d0018c2b9c6953, keycount=23, bloomtype=ROW, size=29.0 K, encoding=NONE, compression=NONE, seqNum=241, earliestPutTs=1686243523593 2023-06-08 16:58:45,658 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.Compactor(207): Compacting 97d490015dce4eb6bde5eb84b1e18a4a, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=252, earliestPutTs=1686243523621 2023-06-08 16:58:45,670 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=19.96 KB at sequenceid=274 (bloomFilter=true), to=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/60809af9115e443ba508ccb5ad736419 2023-06-08 16:58:45,680 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] throttle.PressureAwareThroughputController(145): 66a0067d300bacd28279aeeefe7744ac#info#compaction#51 average throughput is 46.18 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-06-08 16:58:45,681 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/60809af9115e443ba508ccb5ad736419 as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/60809af9115e443ba508ccb5ad736419 2023-06-08 16:58:45,690 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/60809af9115e443ba508ccb5ad736419, entries=19, sequenceid=274, filesize=24.8 K 2023-06-08 16:58:45,691 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~19.96 KB/20444, heapSize ~21.61 KB/22128, currentSize=9.46 KB/9684 for 66a0067d300bacd28279aeeefe7744ac in 35ms, sequenceid=274, compaction requested=false 2023-06-08 16:58:45,691 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 66a0067d300bacd28279aeeefe7744ac: 2023-06-08 16:58:45,693 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/b7b9497c1e454766b72564cd4ab8c027 as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/b7b9497c1e454766b72564cd4ab8c027 2023-06-08 16:58:45,699 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 66a0067d300bacd28279aeeefe7744ac/info of 66a0067d300bacd28279aeeefe7744ac into b7b9497c1e454766b72564cd4ab8c027(size=147.8 K), total size for store is 172.6 K. This selection was in queue for 0sec, and took 0sec to execute. 
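The flushes in this run trigger at roughly 7 to 24 KB of memstore data, far below the 128 MB production default for hbase.hregion.memstore.flush.size, so the test has evidently shrunk the flush threshold to force frequent flushes and compactions; an 8 KB flush size would also explain the 32.0 K blocking limit in the RegionTooBusyException earlier, since 8 KB times the default block multiplier of 4 is 32 KB. The sketch below shows the knob itself; the 8 KB value is an inference from the observed behaviour, not a value read out of the test source.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class FlushSizeSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Production default is 128 MB; a tiny value makes a region flush after a handful of puts,
        // producing the rapid flush/compact cycle visible throughout this log.
        conf.setLong("hbase.hregion.memstore.flush.size", 8 * 1024);
        System.out.println("flush.size=" + conf.getLong("hbase.hregion.memstore.flush.size", -1) + " bytes");
      }
    }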
2023-06-08 16:58:45,700 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 66a0067d300bacd28279aeeefe7744ac: 2023-06-08 16:58:45,700 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac., storeName=66a0067d300bacd28279aeeefe7744ac/info, priority=13, startTime=1686243525655; duration=0sec 2023-06-08 16:58:45,700 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-08 16:58:47,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45795] regionserver.HRegion(9158): Flush requested on 66a0067d300bacd28279aeeefe7744ac 2023-06-08 16:58:47,670 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 66a0067d300bacd28279aeeefe7744ac 1/1 column families, dataSize=10.51 KB heapSize=11.50 KB 2023-06-08 16:58:47,683 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.51 KB at sequenceid=288 (bloomFilter=true), to=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/52a9f758afc94651877514a0f52c5db2 2023-06-08 16:58:47,691 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/52a9f758afc94651877514a0f52c5db2 as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/52a9f758afc94651877514a0f52c5db2 2023-06-08 16:58:47,698 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/52a9f758afc94651877514a0f52c5db2, entries=10, sequenceid=288, filesize=15.3 K 2023-06-08 16:58:47,699 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~10.51 KB/10760, heapSize ~11.48 KB/11760, currentSize=17.86 KB/18292 for 66a0067d300bacd28279aeeefe7744ac in 29ms, sequenceid=288, compaction requested=true 2023-06-08 16:58:47,699 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 66a0067d300bacd28279aeeefe7744ac: 2023-06-08 16:58:47,699 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-08 16:58:47,699 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-08 16:58:47,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45795] regionserver.HRegion(9158): Flush requested on 66a0067d300bacd28279aeeefe7744ac 2023-06-08 16:58:47,699 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 66a0067d300bacd28279aeeefe7744ac 1/1 column families, dataSize=18.91 KB heapSize=20.50 KB 2023-06-08 16:58:47,700 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] 
compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 192373 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-08 16:58:47,700 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HStore(1912): 66a0067d300bacd28279aeeefe7744ac/info is initiating minor compaction (all files) 2023-06-08 16:58:47,700 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 66a0067d300bacd28279aeeefe7744ac/info in TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac. 2023-06-08 16:58:47,700 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/b7b9497c1e454766b72564cd4ab8c027, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/60809af9115e443ba508ccb5ad736419, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/52a9f758afc94651877514a0f52c5db2] into tmpdir=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp, totalSize=187.9 K 2023-06-08 16:58:47,701 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.Compactor(207): Compacting b7b9497c1e454766b72564cd4ab8c027, keycount=135, bloomtype=ROW, size=147.8 K, encoding=NONE, compression=NONE, seqNum=252, earliestPutTs=1686243485258 2023-06-08 16:58:47,701 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.Compactor(207): Compacting 60809af9115e443ba508ccb5ad736419, keycount=19, bloomtype=ROW, size=24.8 K, encoding=NONE, compression=NONE, seqNum=274, earliestPutTs=1686243525633 2023-06-08 16:58:47,701 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] compactions.Compactor(207): Compacting 52a9f758afc94651877514a0f52c5db2, keycount=10, bloomtype=ROW, size=15.3 K, encoding=NONE, compression=NONE, seqNum=288, earliestPutTs=1686243525657 2023-06-08 16:58:47,714 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45795] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=66a0067d300bacd28279aeeefe7744ac, server=jenkins-hbase20.apache.org,45795,1686243472079 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-06-08 16:58:47,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45795] ipc.CallRunner(144): callId: 271 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:40166 deadline: 1686243537714, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=66a0067d300bacd28279aeeefe7744ac, server=jenkins-hbase20.apache.org,45795,1686243472079 2023-06-08 16:58:47,716 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] throttle.PressureAwareThroughputController(145): 66a0067d300bacd28279aeeefe7744ac#info#compaction#54 average throughput is 56.10 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-08 16:58:47,730 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/44acebb05d834d6eb5366b465ac75eb1 as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/44acebb05d834d6eb5366b465ac75eb1 2023-06-08 16:58:47,736 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 66a0067d300bacd28279aeeefe7744ac/info of 66a0067d300bacd28279aeeefe7744ac into 44acebb05d834d6eb5366b465ac75eb1(size=178.5 K), total size for store is 178.5 K. This selection was in queue for 0sec, and took 0sec to execute. 
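Both the flush and compaction paths above finish with the same two-step commit: the new HFile is written under the region's .tmp directory and only then moved into the live store directory ("Committing ... .tmp/info/<file> as ... /info/<file>"), so readers never observe a partially written file. The minimal sketch below shows that write-then-rename idiom against a Hadoop FileSystem; the paths are invented and this is the general pattern, not HBase's HRegionFileSystem implementation.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class TmpCommitSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path tmp = new Path("/example/region/.tmp/info/newfile"); // invented path
        Path dst = new Path("/example/region/info/newfile");      // invented path
        try (FSDataOutputStream out = fs.create(tmp)) {
          out.writeBytes("payload"); // write the complete file contents first
        }
        fs.mkdirs(dst.getParent());
        // Only a finished file is moved into the directory readers actually scan.
        if (!fs.rename(tmp, dst)) {
          throw new java.io.IOException("rename failed: " + tmp + " -> " + dst);
        }
      }
    }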
2023-06-08 16:58:47,736 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 66a0067d300bacd28279aeeefe7744ac: 2023-06-08 16:58:47,736 INFO [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac., storeName=66a0067d300bacd28279aeeefe7744ac/info, priority=13, startTime=1686243527699; duration=0sec 2023-06-08 16:58:47,737 DEBUG [RS:0;jenkins-hbase20:45795-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-08 16:58:48,117 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=18.91 KB at sequenceid=309 (bloomFilter=true), to=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/033a6af10cf241ac80f0d424760ed4ac 2023-06-08 16:58:48,128 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/033a6af10cf241ac80f0d424760ed4ac as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/033a6af10cf241ac80f0d424760ed4ac 2023-06-08 16:58:48,136 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/033a6af10cf241ac80f0d424760ed4ac, entries=18, sequenceid=309, filesize=23.7 K 2023-06-08 16:58:48,137 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~18.91 KB/19368, heapSize ~20.48 KB/20976, currentSize=11.56 KB/11836 for 66a0067d300bacd28279aeeefe7744ac in 438ms, sequenceid=309, compaction requested=false 2023-06-08 16:58:48,137 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 66a0067d300bacd28279aeeefe7744ac: 2023-06-08 16:58:57,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45795] regionserver.HRegion(9158): Flush requested on 66a0067d300bacd28279aeeefe7744ac 2023-06-08 16:58:57,726 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 66a0067d300bacd28279aeeefe7744ac 1/1 column families, dataSize=12.61 KB heapSize=13.75 KB 2023-06-08 16:58:57,740 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=12.61 KB at sequenceid=325 (bloomFilter=true), to=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/7719939cfa6d4688a568a307b1842a6a 2023-06-08 16:58:57,752 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/7719939cfa6d4688a568a307b1842a6a as 
hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/7719939cfa6d4688a568a307b1842a6a 2023-06-08 16:58:57,759 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/7719939cfa6d4688a568a307b1842a6a, entries=12, sequenceid=325, filesize=17.4 K 2023-06-08 16:58:57,760 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~12.61 KB/12912, heapSize ~13.73 KB/14064, currentSize=1.05 KB/1076 for 66a0067d300bacd28279aeeefe7744ac in 35ms, sequenceid=325, compaction requested=true 2023-06-08 16:58:57,760 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 66a0067d300bacd28279aeeefe7744ac: 2023-06-08 16:58:57,760 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-06-08 16:58:57,760 DEBUG [RS:0;jenkins-hbase20:45795-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-06-08 16:58:57,761 DEBUG [RS:0;jenkins-hbase20:45795-longCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 224853 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-06-08 16:58:57,761 DEBUG [RS:0;jenkins-hbase20:45795-longCompactions-0] regionserver.HStore(1912): 66a0067d300bacd28279aeeefe7744ac/info is initiating minor compaction (all files) 2023-06-08 16:58:57,761 INFO [RS:0;jenkins-hbase20:45795-longCompactions-0] regionserver.HRegion(2259): Starting compaction of 66a0067d300bacd28279aeeefe7744ac/info in TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac. 
2023-06-08 16:58:57,761 INFO [RS:0;jenkins-hbase20:45795-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/44acebb05d834d6eb5366b465ac75eb1, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/033a6af10cf241ac80f0d424760ed4ac, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/7719939cfa6d4688a568a307b1842a6a] into tmpdir=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp, totalSize=219.6 K 2023-06-08 16:58:57,762 DEBUG [RS:0;jenkins-hbase20:45795-longCompactions-0] compactions.Compactor(207): Compacting 44acebb05d834d6eb5366b465ac75eb1, keycount=164, bloomtype=ROW, size=178.5 K, encoding=NONE, compression=NONE, seqNum=288, earliestPutTs=1686243485258 2023-06-08 16:58:57,762 DEBUG [RS:0;jenkins-hbase20:45795-longCompactions-0] compactions.Compactor(207): Compacting 033a6af10cf241ac80f0d424760ed4ac, keycount=18, bloomtype=ROW, size=23.7 K, encoding=NONE, compression=NONE, seqNum=309, earliestPutTs=1686243527671 2023-06-08 16:58:57,762 DEBUG [RS:0;jenkins-hbase20:45795-longCompactions-0] compactions.Compactor(207): Compacting 7719939cfa6d4688a568a307b1842a6a, keycount=12, bloomtype=ROW, size=17.4 K, encoding=NONE, compression=NONE, seqNum=325, earliestPutTs=1686243527700 2023-06-08 16:58:57,774 INFO [RS:0;jenkins-hbase20:45795-longCompactions-0] throttle.PressureAwareThroughputController(145): 66a0067d300bacd28279aeeefe7744ac#info#compaction#56 average throughput is 66.36 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-06-08 16:58:57,784 DEBUG [RS:0;jenkins-hbase20:45795-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/dfb3e057ea59483c82d5c8e5eafe7cc9 as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/dfb3e057ea59483c82d5c8e5eafe7cc9 2023-06-08 16:58:57,790 INFO [RS:0;jenkins-hbase20:45795-longCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 66a0067d300bacd28279aeeefe7744ac/info of 66a0067d300bacd28279aeeefe7744ac into dfb3e057ea59483c82d5c8e5eafe7cc9(size=210.2 K), total size for store is 210.2 K. This selection was in queue for 0sec, and took 0sec to execute. 
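Immediately after this compaction the test's listener thread rolls the write-ahead log (the "Rolled WAL ... with entries=312, filesize=307.75 KB; new WAL ..." lines that follow): the current WAL file is closed, a new one named with a fresh timestamp takes over, and the old file is archived once its edits are no longer needed. The test drives this directly, but the same roll can be requested through the Admin API; the sketch below is a hypothetical use of Admin.rollWALWriter, with the region server name copied from the log purely for illustration.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class RollWalSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Region server name in host,port,startcode form, as it appears in the log.
          ServerName rs = ServerName.valueOf("jenkins-hbase20.apache.org,45795,1686243472079");
          admin.rollWALWriter(rs); // ask that server to close its current WAL and start a new one
        }
      }
    }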
2023-06-08 16:58:57,790 DEBUG [RS:0;jenkins-hbase20:45795-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for 66a0067d300bacd28279aeeefe7744ac: 2023-06-08 16:58:57,790 INFO [RS:0;jenkins-hbase20:45795-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac., storeName=66a0067d300bacd28279aeeefe7744ac/info, priority=13, startTime=1686243537760; duration=0sec 2023-06-08 16:58:57,790 DEBUG [RS:0;jenkins-hbase20:45795-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-06-08 16:58:59,729 INFO [Listener at localhost.localdomain/33511] wal.AbstractTestLogRolling(188): after writing there are 0 log files 2023-06-08 16:58:59,760 INFO [Listener at localhost.localdomain/33511] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/WALs/jenkins-hbase20.apache.org,45795,1686243472079/jenkins-hbase20.apache.org%2C45795%2C1686243472079.1686243472469 with entries=312, filesize=307.75 KB; new WAL /user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/WALs/jenkins-hbase20.apache.org,45795,1686243472079/jenkins-hbase20.apache.org%2C45795%2C1686243472079.1686243539730 2023-06-08 16:58:59,760 DEBUG [Listener at localhost.localdomain/33511] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37463,DS-c20adba3-ee3d-4404-8208-9bec70188b85,DISK], DatanodeInfoWithStorage[127.0.0.1:45579,DS-097edd3d-5c71-4ff7-8e5e-9ec9445a560b,DISK]] 2023-06-08 16:58:59,761 DEBUG [Listener at localhost.localdomain/33511] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/WALs/jenkins-hbase20.apache.org,45795,1686243472079/jenkins-hbase20.apache.org%2C45795%2C1686243472079.1686243472469 is not closed yet, will try archiving it next time 2023-06-08 16:58:59,769 INFO [Listener at localhost.localdomain/33511] regionserver.HRegion(2745): Flushing 66a0067d300bacd28279aeeefe7744ac 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-06-08 16:58:59,781 INFO [Listener at localhost.localdomain/33511] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=330 (bloomFilter=true), to=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/926fcbc5013146cdaa4283c0b5e09fea 2023-06-08 16:58:59,787 DEBUG [Listener at localhost.localdomain/33511] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/.tmp/info/926fcbc5013146cdaa4283c0b5e09fea as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/926fcbc5013146cdaa4283c0b5e09fea 2023-06-08 16:58:59,793 INFO [Listener at localhost.localdomain/33511] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/926fcbc5013146cdaa4283c0b5e09fea, entries=1, sequenceid=330, filesize=5.8 K 2023-06-08 16:58:59,794 
INFO [Listener at localhost.localdomain/33511] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 66a0067d300bacd28279aeeefe7744ac in 25ms, sequenceid=330, compaction requested=false 2023-06-08 16:58:59,794 DEBUG [Listener at localhost.localdomain/33511] regionserver.HRegion(2446): Flush status journal for 66a0067d300bacd28279aeeefe7744ac: 2023-06-08 16:58:59,795 INFO [Listener at localhost.localdomain/33511] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.26 KB heapSize=4.19 KB 2023-06-08 16:58:59,803 INFO [Listener at localhost.localdomain/33511] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.26 KB at sequenceid=24 (bloomFilter=false), to=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/meta/1588230740/.tmp/info/48d8d6207f9e46dcbe5d8f09ce688bbb 2023-06-08 16:58:59,808 DEBUG [Listener at localhost.localdomain/33511] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/meta/1588230740/.tmp/info/48d8d6207f9e46dcbe5d8f09ce688bbb as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/meta/1588230740/info/48d8d6207f9e46dcbe5d8f09ce688bbb 2023-06-08 16:58:59,813 INFO [Listener at localhost.localdomain/33511] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/meta/1588230740/info/48d8d6207f9e46dcbe5d8f09ce688bbb, entries=16, sequenceid=24, filesize=7.0 K 2023-06-08 16:58:59,814 INFO [Listener at localhost.localdomain/33511] regionserver.HRegion(2948): Finished flush of dataSize ~2.26 KB/2316, heapSize ~3.67 KB/3760, currentSize=0 B/0 for 1588230740 in 19ms, sequenceid=24, compaction requested=false 2023-06-08 16:58:59,814 DEBUG [Listener at localhost.localdomain/33511] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-06-08 16:58:59,814 DEBUG [Listener at localhost.localdomain/33511] regionserver.HRegion(2446): Flush status journal for e9416fc1ff249b5555d59c33e3d746db: 2023-06-08 16:58:59,814 INFO [Listener at localhost.localdomain/33511] regionserver.HRegion(2745): Flushing f60bec9f2c5268f9e3d3e619301ef857 1/1 column families, dataSize=78 B heapSize=488 B 2023-06-08 16:58:59,824 INFO [Listener at localhost.localdomain/33511] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/namespace/f60bec9f2c5268f9e3d3e619301ef857/.tmp/info/937d67352dfe4bf685b05385bbc10112 2023-06-08 16:58:59,829 DEBUG [Listener at localhost.localdomain/33511] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/namespace/f60bec9f2c5268f9e3d3e619301ef857/.tmp/info/937d67352dfe4bf685b05385bbc10112 as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/namespace/f60bec9f2c5268f9e3d3e619301ef857/info/937d67352dfe4bf685b05385bbc10112 2023-06-08 16:58:59,833 INFO [Listener at localhost.localdomain/33511] regionserver.HStore(1080): Added 
hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/namespace/f60bec9f2c5268f9e3d3e619301ef857/info/937d67352dfe4bf685b05385bbc10112, entries=2, sequenceid=6, filesize=4.8 K 2023-06-08 16:58:59,834 INFO [Listener at localhost.localdomain/33511] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for f60bec9f2c5268f9e3d3e619301ef857 in 20ms, sequenceid=6, compaction requested=false 2023-06-08 16:58:59,834 DEBUG [Listener at localhost.localdomain/33511] regionserver.HRegion(2446): Flush status journal for f60bec9f2c5268f9e3d3e619301ef857: 2023-06-08 16:58:59,842 INFO [Listener at localhost.localdomain/33511] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/WALs/jenkins-hbase20.apache.org,45795,1686243472079/jenkins-hbase20.apache.org%2C45795%2C1686243472079.1686243539730 with entries=4, filesize=1.22 KB; new WAL /user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/WALs/jenkins-hbase20.apache.org,45795,1686243472079/jenkins-hbase20.apache.org%2C45795%2C1686243472079.1686243539835 2023-06-08 16:58:59,843 DEBUG [Listener at localhost.localdomain/33511] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37463,DS-c20adba3-ee3d-4404-8208-9bec70188b85,DISK], DatanodeInfoWithStorage[127.0.0.1:45579,DS-097edd3d-5c71-4ff7-8e5e-9ec9445a560b,DISK]] 2023-06-08 16:58:59,843 DEBUG [Listener at localhost.localdomain/33511] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/WALs/jenkins-hbase20.apache.org,45795,1686243472079/jenkins-hbase20.apache.org%2C45795%2C1686243472079.1686243539730 is not closed yet, will try archiving it next time 2023-06-08 16:58:59,843 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/WALs/jenkins-hbase20.apache.org,45795,1686243472079/jenkins-hbase20.apache.org%2C45795%2C1686243472079.1686243472469 to hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/oldWALs/jenkins-hbase20.apache.org%2C45795%2C1686243472079.1686243472469 2023-06-08 16:58:59,844 INFO [Listener at localhost.localdomain/33511] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-06-08 16:58:59,845 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/WALs/jenkins-hbase20.apache.org,45795,1686243472079/jenkins-hbase20.apache.org%2C45795%2C1686243472079.1686243539730 to hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/oldWALs/jenkins-hbase20.apache.org%2C45795%2C1686243472079.1686243539730 2023-06-08 16:58:59,944 INFO [Listener at localhost.localdomain/33511] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-08 16:58:59,944 INFO [Listener at localhost.localdomain/33511] client.ConnectionImplementation(1980): Closing master protocol: MasterService 2023-06-08 16:58:59,945 DEBUG [Listener at localhost.localdomain/33511] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1191a066 to 127.0.0.1:64082 2023-06-08 16:58:59,945 DEBUG [Listener at localhost.localdomain/33511] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 16:58:59,945 DEBUG [Listener at localhost.localdomain/33511] util.JVMClusterUtil(237): 
Shutting down HBase Cluster 2023-06-08 16:58:59,945 DEBUG [Listener at localhost.localdomain/33511] util.JVMClusterUtil(257): Found active master hash=1854364310, stopped=false 2023-06-08 16:58:59,945 INFO [Listener at localhost.localdomain/33511] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,46075,1686243472037 2023-06-08 16:58:59,948 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-08 16:58:59,948 INFO [Listener at localhost.localdomain/33511] procedure2.ProcedureExecutor(629): Stopping 2023-06-08 16:58:59,948 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:58:59,948 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): regionserver:45795-0x101cba78abe0001, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-08 16:58:59,950 DEBUG [Listener at localhost.localdomain/33511] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7eeb3008 to 127.0.0.1:64082 2023-06-08 16:58:59,950 DEBUG [Listener at localhost.localdomain/33511] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 16:58:59,950 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45795-0x101cba78abe0001, quorum=127.0.0.1:64082, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 16:58:59,950 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 16:58:59,950 INFO [Listener at localhost.localdomain/33511] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,45795,1686243472079' ***** 2023-06-08 16:58:59,951 INFO [Listener at localhost.localdomain/33511] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-08 16:58:59,951 INFO [RS:0;jenkins-hbase20:45795] regionserver.HeapMemoryManager(220): Stopping 2023-06-08 16:58:59,951 INFO [RS:0;jenkins-hbase20:45795] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-08 16:58:59,951 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-08 16:58:59,951 INFO [RS:0;jenkins-hbase20:45795] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
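The WAL entries a few lines above (AbstractFSWAL rolling the 1686243472469 writer and WAL-Archive-0 moving it to oldWALs once the regions are flushed) reflect the roll-then-flush cycle this test exercises. A minimal sketch of driving the same cycle through the public client API, assuming an already open Connection and the region server's ServerName (neither is constructed here, and the test itself rolls the WAL internally rather than through Admin):

    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;

    public class WalRollSketch {
      // conn and sn are assumed to exist; they are not taken from this log.
      static void rollAndFlush(Connection conn, ServerName sn) throws Exception {
        try (Admin admin = conn.getAdmin()) {
          // Force the region server to start a new WAL file, like the
          // "Rolled WAL ... with entries=312" entry above.
          admin.rollWALWriter(sn);
          // Flush memstores so the old WAL carries no unpersisted edits and can be
          // archived to oldWALs, as the WAL-Archive-0 thread does above.
          admin.flush(TableName.valueOf("TestLogRolling-testLogRolling"));
        }
      }
    }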
2023-06-08 16:58:59,951 INFO [RS:0;jenkins-hbase20:45795] regionserver.HRegionServer(3303): Received CLOSE for 66a0067d300bacd28279aeeefe7744ac 2023-06-08 16:58:59,952 INFO [RS:0;jenkins-hbase20:45795] regionserver.HRegionServer(3303): Received CLOSE for e9416fc1ff249b5555d59c33e3d746db 2023-06-08 16:58:59,952 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 66a0067d300bacd28279aeeefe7744ac, disabling compactions & flushes 2023-06-08 16:58:59,952 INFO [RS:0;jenkins-hbase20:45795] regionserver.HRegionServer(3303): Received CLOSE for f60bec9f2c5268f9e3d3e619301ef857 2023-06-08 16:58:59,952 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac. 2023-06-08 16:58:59,952 INFO [RS:0;jenkins-hbase20:45795] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,45795,1686243472079 2023-06-08 16:58:59,952 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac. 2023-06-08 16:58:59,952 DEBUG [RS:0;jenkins-hbase20:45795] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x599f4c08 to 127.0.0.1:64082 2023-06-08 16:58:59,952 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac. after waiting 0 ms 2023-06-08 16:58:59,952 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac. 2023-06-08 16:58:59,952 DEBUG [RS:0;jenkins-hbase20:45795] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 16:58:59,952 INFO [RS:0;jenkins-hbase20:45795] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-08 16:58:59,952 INFO [RS:0;jenkins-hbase20:45795] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-08 16:58:59,953 INFO [RS:0;jenkins-hbase20:45795] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-06-08 16:58:59,953 INFO [RS:0;jenkins-hbase20:45795] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-08 16:58:59,957 INFO [RS:0;jenkins-hbase20:45795] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-06-08 16:58:59,959 DEBUG [RS:0;jenkins-hbase20:45795] regionserver.HRegionServer(1478): Online Regions={66a0067d300bacd28279aeeefe7744ac=TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac., 1588230740=hbase:meta,,1.1588230740, e9416fc1ff249b5555d59c33e3d746db=TestLogRolling-testLogRolling,,1686243495457.e9416fc1ff249b5555d59c33e3d746db., f60bec9f2c5268f9e3d3e619301ef857=hbase:namespace,,1686243472628.f60bec9f2c5268f9e3d3e619301ef857.} 2023-06-08 16:58:59,959 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-08 16:58:59,960 DEBUG [RS:0;jenkins-hbase20:45795] regionserver.HRegionServer(1504): Waiting on 1588230740, 66a0067d300bacd28279aeeefe7744ac, e9416fc1ff249b5555d59c33e3d746db, f60bec9f2c5268f9e3d3e619301ef857 2023-06-08 16:58:59,961 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-08 16:58:59,962 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-08 16:58:59,963 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-08 16:58:59,964 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-08 16:58:59,976 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/15b6a76e4c02401db0095b33fffb879a.3d4cf2b7551519f755e0a5ca6c209973->hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/15b6a76e4c02401db0095b33fffb879a-top, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/3cabca768aba4d0ca2c03066dd09d99b, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/e97407cf9822470784c343a447ea4031, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/05cc7a0fa385496688d3954bf84b4704, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/3d5e3d25e7c946028df31e27eca56b36, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/2c8e9ddd66c142b0ab82f8462bd303a9, 
hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/0dc88df3697e4498b8baaaa47d12e65a, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/cd3477ae3b4048bc8e4dc5166c1a7f4b, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/3681a0770a1643c0a9a2ff2fd962439f, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/80df77c07fab433bbeb6117c940cf686, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/fbaf355f446244b4994877f937bbda8b, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/613ef7f7b1414253b6795c0fadc70ea9, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/b0b963a298ef4d2ea60c95bb93ac9845, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/db6f70144ede4175919603d73c5c1f8a, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/22371f579b114090b6d0018c2b9c6953, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/b7b9497c1e454766b72564cd4ab8c027, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/97d490015dce4eb6bde5eb84b1e18a4a, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/60809af9115e443ba508ccb5ad736419, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/44acebb05d834d6eb5366b465ac75eb1, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/52a9f758afc94651877514a0f52c5db2, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/033a6af10cf241ac80f0d424760ed4ac, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/7719939cfa6d4688a568a307b1842a6a] to archive 2023-06-08 16:58:59,977 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac.-1] backup.HFileArchiver(360): Archiving compacted 
files. 2023-06-08 16:58:59,979 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/15b6a76e4c02401db0095b33fffb879a.3d4cf2b7551519f755e0a5ca6c209973 to hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/archive/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/15b6a76e4c02401db0095b33fffb879a.3d4cf2b7551519f755e0a5ca6c209973 2023-06-08 16:58:59,980 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/3cabca768aba4d0ca2c03066dd09d99b to hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/archive/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/3cabca768aba4d0ca2c03066dd09d99b 2023-06-08 16:58:59,980 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/meta/1588230740/recovered.edits/27.seqid, newMaxSeqId=27, maxSeqId=1 2023-06-08 16:58:59,981 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-06-08 16:58:59,982 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-08 16:58:59,982 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-08 16:58:59,982 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-06-08 16:58:59,982 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/e97407cf9822470784c343a447ea4031 to hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/archive/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/e97407cf9822470784c343a447ea4031 2023-06-08 16:58:59,984 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/05cc7a0fa385496688d3954bf84b4704 to hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/archive/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/05cc7a0fa385496688d3954bf84b4704 2023-06-08 16:58:59,985 DEBUG 
[StoreCloser-TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/3d5e3d25e7c946028df31e27eca56b36 to hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/archive/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/3d5e3d25e7c946028df31e27eca56b36 2023-06-08 16:58:59,986 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/2c8e9ddd66c142b0ab82f8462bd303a9 to hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/archive/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/2c8e9ddd66c142b0ab82f8462bd303a9 2023-06-08 16:58:59,987 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/0dc88df3697e4498b8baaaa47d12e65a to hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/archive/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/0dc88df3697e4498b8baaaa47d12e65a 2023-06-08 16:58:59,989 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/cd3477ae3b4048bc8e4dc5166c1a7f4b to hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/archive/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/cd3477ae3b4048bc8e4dc5166c1a7f4b 2023-06-08 16:58:59,990 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/3681a0770a1643c0a9a2ff2fd962439f to hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/archive/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/3681a0770a1643c0a9a2ff2fd962439f 2023-06-08 16:58:59,992 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/80df77c07fab433bbeb6117c940cf686 to 
hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/archive/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/80df77c07fab433bbeb6117c940cf686 2023-06-08 16:58:59,993 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/fbaf355f446244b4994877f937bbda8b to hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/archive/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/fbaf355f446244b4994877f937bbda8b 2023-06-08 16:58:59,995 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/613ef7f7b1414253b6795c0fadc70ea9 to hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/archive/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/613ef7f7b1414253b6795c0fadc70ea9 2023-06-08 16:58:59,996 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/b0b963a298ef4d2ea60c95bb93ac9845 to hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/archive/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/b0b963a298ef4d2ea60c95bb93ac9845 2023-06-08 16:58:59,998 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/db6f70144ede4175919603d73c5c1f8a to hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/archive/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/db6f70144ede4175919603d73c5c1f8a 2023-06-08 16:58:59,999 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/22371f579b114090b6d0018c2b9c6953 to hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/archive/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/22371f579b114090b6d0018c2b9c6953 2023-06-08 16:59:00,001 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, 
hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/b7b9497c1e454766b72564cd4ab8c027 to hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/archive/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/b7b9497c1e454766b72564cd4ab8c027 2023-06-08 16:59:00,002 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/97d490015dce4eb6bde5eb84b1e18a4a to hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/archive/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/97d490015dce4eb6bde5eb84b1e18a4a 2023-06-08 16:59:00,003 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/60809af9115e443ba508ccb5ad736419 to hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/archive/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/60809af9115e443ba508ccb5ad736419 2023-06-08 16:59:00,004 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/44acebb05d834d6eb5366b465ac75eb1 to hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/archive/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/44acebb05d834d6eb5366b465ac75eb1 2023-06-08 16:59:00,005 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/52a9f758afc94651877514a0f52c5db2 to hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/archive/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/52a9f758afc94651877514a0f52c5db2 2023-06-08 16:59:00,006 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/033a6af10cf241ac80f0d424760ed4ac to hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/archive/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/033a6af10cf241ac80f0d424760ed4ac 2023-06-08 
16:59:00,007 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/7719939cfa6d4688a568a307b1842a6a to hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/archive/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/7719939cfa6d4688a568a307b1842a6a 2023-06-08 16:59:00,012 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/recovered.edits/333.seqid, newMaxSeqId=333, maxSeqId=85 2023-06-08 16:59:00,014 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac. 2023-06-08 16:59:00,014 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 66a0067d300bacd28279aeeefe7744ac: 2023-06-08 16:59:00,014 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,row0062,1686243495457.66a0067d300bacd28279aeeefe7744ac. 2023-06-08 16:59:00,014 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing e9416fc1ff249b5555d59c33e3d746db, disabling compactions & flushes 2023-06-08 16:59:00,014 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1686243495457.e9416fc1ff249b5555d59c33e3d746db. 2023-06-08 16:59:00,014 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1686243495457.e9416fc1ff249b5555d59c33e3d746db. 2023-06-08 16:59:00,014 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1686243495457.e9416fc1ff249b5555d59c33e3d746db. after waiting 0 ms 2023-06-08 16:59:00,014 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1686243495457.e9416fc1ff249b5555d59c33e3d746db. 2023-06-08 16:59:00,015 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686243495457.e9416fc1ff249b5555d59c33e3d746db.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/e9416fc1ff249b5555d59c33e3d746db/info/15b6a76e4c02401db0095b33fffb879a.3d4cf2b7551519f755e0a5ca6c209973->hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/3d4cf2b7551519f755e0a5ca6c209973/info/15b6a76e4c02401db0095b33fffb879a-bottom] to archive 2023-06-08 16:59:00,015 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686243495457.e9416fc1ff249b5555d59c33e3d746db.-1] backup.HFileArchiver(360): Archiving compacted files. 
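Each backup.HFileArchiver entry above relocates a compacted store file from the table's data directory to the same relative path under the cluster's archive directory. A small illustrative sketch of that path mapping only; toArchivePath is a hypothetical helper written for this note, not HBase's internal API:

    import org.apache.hadoop.fs.Path;

    public class ArchivePathSketch {
      // Map <rootdir>/data/<ns>/<table>/<region>/<cf>/<hfile>
      //  to <rootdir>/archive/data/<ns>/<table>/<region>/<cf>/<hfile>,
      // mirroring the moves logged by backup.HFileArchiver above.
      static Path toArchivePath(Path rootDir, Path storeFile) {
        String relative = storeFile.toUri().getPath()
            .substring(rootDir.toUri().getPath().length() + 1); // data/<ns>/<table>/...
        return new Path(new Path(rootDir, "archive"), relative);
      }

      public static void main(String[] args) {
        Path root = new Path("/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa");
        Path storeFile = new Path(root,
            "data/default/TestLogRolling-testLogRolling/66a0067d300bacd28279aeeefe7744ac/info/7719939cfa6d4688a568a307b1842a6a");
        // Prints the archive location matching the final HFileArchiver entry above
        // (scheme and authority omitted in this sketch).
        System.out.println(toArchivePath(root, storeFile));
      }
    }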
2023-06-08 16:59:00,018 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1686243495457.e9416fc1ff249b5555d59c33e3d746db.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/e9416fc1ff249b5555d59c33e3d746db/info/15b6a76e4c02401db0095b33fffb879a.3d4cf2b7551519f755e0a5ca6c209973 to hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/archive/data/default/TestLogRolling-testLogRolling/e9416fc1ff249b5555d59c33e3d746db/info/15b6a76e4c02401db0095b33fffb879a.3d4cf2b7551519f755e0a5ca6c209973 2023-06-08 16:59:00,024 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/default/TestLogRolling-testLogRolling/e9416fc1ff249b5555d59c33e3d746db/recovered.edits/90.seqid, newMaxSeqId=90, maxSeqId=85 2023-06-08 16:59:00,025 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1686243495457.e9416fc1ff249b5555d59c33e3d746db. 2023-06-08 16:59:00,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for e9416fc1ff249b5555d59c33e3d746db: 2023-06-08 16:59:00,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,,1686243495457.e9416fc1ff249b5555d59c33e3d746db. 2023-06-08 16:59:00,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing f60bec9f2c5268f9e3d3e619301ef857, disabling compactions & flushes 2023-06-08 16:59:00,025 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686243472628.f60bec9f2c5268f9e3d3e619301ef857. 2023-06-08 16:59:00,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686243472628.f60bec9f2c5268f9e3d3e619301ef857. 2023-06-08 16:59:00,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686243472628.f60bec9f2c5268f9e3d3e619301ef857. after waiting 0 ms 2023-06-08 16:59:00,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686243472628.f60bec9f2c5268f9e3d3e619301ef857. 2023-06-08 16:59:00,029 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/data/hbase/namespace/f60bec9f2c5268f9e3d3e619301ef857/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-06-08 16:59:00,030 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686243472628.f60bec9f2c5268f9e3d3e619301ef857. 2023-06-08 16:59:00,030 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for f60bec9f2c5268f9e3d3e619301ef857: 2023-06-08 16:59:00,030 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1686243472628.f60bec9f2c5268f9e3d3e619301ef857. 
2023-06-08 16:59:00,161 INFO [RS:0;jenkins-hbase20:45795] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,45795,1686243472079; all regions closed. 2023-06-08 16:59:00,163 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/WALs/jenkins-hbase20.apache.org,45795,1686243472079 2023-06-08 16:59:00,176 DEBUG [RS:0;jenkins-hbase20:45795] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/oldWALs 2023-06-08 16:59:00,176 INFO [RS:0;jenkins-hbase20:45795] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C45795%2C1686243472079.meta:.meta(num 1686243472571) 2023-06-08 16:59:00,176 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/WALs/jenkins-hbase20.apache.org,45795,1686243472079 2023-06-08 16:59:00,183 DEBUG [RS:0;jenkins-hbase20:45795] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/oldWALs 2023-06-08 16:59:00,183 INFO [RS:0;jenkins-hbase20:45795] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C45795%2C1686243472079:(num 1686243539835) 2023-06-08 16:59:00,183 DEBUG [RS:0;jenkins-hbase20:45795] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 16:59:00,183 INFO [RS:0;jenkins-hbase20:45795] regionserver.LeaseManager(133): Closed leases 2023-06-08 16:59:00,184 INFO [RS:0;jenkins-hbase20:45795] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-06-08 16:59:00,184 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-06-08 16:59:00,185 INFO [RS:0;jenkins-hbase20:45795] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:45795 2023-06-08 16:59:00,187 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): regionserver:45795-0x101cba78abe0001, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,45795,1686243472079 2023-06-08 16:59:00,187 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 16:59:00,187 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): regionserver:45795-0x101cba78abe0001, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 16:59:00,188 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,45795,1686243472079] 2023-06-08 16:59:00,188 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,45795,1686243472079; numProcessing=1 2023-06-08 16:59:00,189 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,45795,1686243472079 already deleted, retry=false 2023-06-08 16:59:00,189 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,45795,1686243472079 expired; onlineServers=0 2023-06-08 16:59:00,189 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,46075,1686243472037' ***** 2023-06-08 16:59:00,189 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-06-08 16:59:00,189 DEBUG [M:0;jenkins-hbase20:46075] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6ab8d39e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-08 16:59:00,189 INFO [M:0;jenkins-hbase20:46075] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,46075,1686243472037 2023-06-08 16:59:00,189 INFO [M:0;jenkins-hbase20:46075] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,46075,1686243472037; all regions closed. 2023-06-08 16:59:00,189 DEBUG [M:0;jenkins-hbase20:46075] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 16:59:00,189 DEBUG [M:0;jenkins-hbase20:46075] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-06-08 16:59:00,189 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-06-08 16:59:00,189 DEBUG [M:0;jenkins-hbase20:46075] cleaner.HFileCleaner(317): Stopping file delete threads 2023-06-08 16:59:00,189 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686243472206] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686243472206,5,FailOnTimeoutGroup] 2023-06-08 16:59:00,190 INFO [M:0;jenkins-hbase20:46075] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-06-08 16:59:00,189 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686243472206] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686243472206,5,FailOnTimeoutGroup] 2023-06-08 16:59:00,190 INFO [M:0;jenkins-hbase20:46075] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-06-08 16:59:00,191 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-06-08 16:59:00,191 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:59:00,191 INFO [M:0;jenkins-hbase20:46075] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-06-08 16:59:00,191 DEBUG [M:0;jenkins-hbase20:46075] master.HMaster(1512): Stopping service threads 2023-06-08 16:59:00,191 INFO [M:0;jenkins-hbase20:46075] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-06-08 16:59:00,191 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-08 16:59:00,192 ERROR [M:0;jenkins-hbase20:46075] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-06-08 16:59:00,192 INFO [M:0;jenkins-hbase20:46075] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-06-08 16:59:00,192 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-06-08 16:59:00,192 DEBUG [M:0;jenkins-hbase20:46075] zookeeper.ZKUtil(398): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-06-08 16:59:00,192 WARN [M:0;jenkins-hbase20:46075] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-06-08 16:59:00,192 INFO [M:0;jenkins-hbase20:46075] assignment.AssignmentManager(315): Stopping assignment manager 2023-06-08 16:59:00,193 INFO [M:0;jenkins-hbase20:46075] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-06-08 16:59:00,193 DEBUG [M:0;jenkins-hbase20:46075] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-08 16:59:00,193 INFO [M:0;jenkins-hbase20:46075] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:59:00,193 DEBUG [M:0;jenkins-hbase20:46075] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:59:00,193 DEBUG [M:0;jenkins-hbase20:46075] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-08 16:59:00,193 DEBUG [M:0;jenkins-hbase20:46075] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:59:00,193 INFO [M:0;jenkins-hbase20:46075] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=64.78 KB heapSize=78.52 KB 2023-06-08 16:59:00,202 INFO [M:0;jenkins-hbase20:46075] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=64.78 KB at sequenceid=160 (bloomFilter=true), to=hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/af3ceb3ce4ab4823ac53b4933ee5270c 2023-06-08 16:59:00,207 INFO [M:0;jenkins-hbase20:46075] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for af3ceb3ce4ab4823ac53b4933ee5270c 2023-06-08 16:59:00,208 DEBUG [M:0;jenkins-hbase20:46075] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/af3ceb3ce4ab4823ac53b4933ee5270c as hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/af3ceb3ce4ab4823ac53b4933ee5270c 2023-06-08 16:59:00,212 INFO [M:0;jenkins-hbase20:46075] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for af3ceb3ce4ab4823ac53b4933ee5270c 2023-06-08 16:59:00,213 INFO [M:0;jenkins-hbase20:46075] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34495/user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/af3ceb3ce4ab4823ac53b4933ee5270c, entries=18, sequenceid=160, filesize=6.9 K 2023-06-08 16:59:00,213 INFO [M:0;jenkins-hbase20:46075] regionserver.HRegion(2948): Finished flush of dataSize ~64.78 KB/66332, heapSize 
~78.51 KB/80392, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 20ms, sequenceid=160, compaction requested=false 2023-06-08 16:59:00,215 INFO [M:0;jenkins-hbase20:46075] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:59:00,215 DEBUG [M:0;jenkins-hbase20:46075] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-08 16:59:00,215 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/19f4671e-1f82-7579-6320-d4dc758ddffa/MasterData/WALs/jenkins-hbase20.apache.org,46075,1686243472037 2023-06-08 16:59:00,218 INFO [M:0;jenkins-hbase20:46075] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-06-08 16:59:00,218 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-08 16:59:00,219 INFO [M:0;jenkins-hbase20:46075] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:46075 2023-06-08 16:59:00,221 DEBUG [M:0;jenkins-hbase20:46075] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,46075,1686243472037 already deleted, retry=false 2023-06-08 16:59:00,288 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): regionserver:45795-0x101cba78abe0001, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 16:59:00,288 INFO [RS:0;jenkins-hbase20:45795] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,45795,1686243472079; zookeeper connection closed. 2023-06-08 16:59:00,288 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): regionserver:45795-0x101cba78abe0001, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 16:59:00,289 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1208d940] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1208d940 2023-06-08 16:59:00,290 INFO [Listener at localhost.localdomain/33511] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-06-08 16:59:00,348 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-06-08 16:59:00,388 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 16:59:00,388 INFO [M:0;jenkins-hbase20:46075] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,46075,1686243472037; zookeeper connection closed. 
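At this point the region server and master have exited and their ZooKeeper sessions are closed; the remaining entries stop the mini-DFS datanodes. In the test harness this teardown is normally driven by a single call; a minimal sketch assuming the HBaseTestingUtility used throughout this log (the test body is elided):

    import org.apache.hadoop.hbase.HBaseTestingUtility;

    public class MiniClusterLifecycleSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        util.startMiniCluster();   // brings up ZK, mini-DFS, master and region server
        try {
          // ... test body would go here ...
        } finally {
          // Produces the "Shutting down minicluster" and datanode shutdown entries above.
          util.shutdownMiniCluster();
        }
      }
    }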
2023-06-08 16:59:00,388 DEBUG [Listener at localhost.localdomain/33511-EventThread] zookeeper.ZKWatcher(600): master:46075-0x101cba78abe0000, quorum=127.0.0.1:64082, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 16:59:00,390 WARN [Listener at localhost.localdomain/33511] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 16:59:00,394 INFO [Listener at localhost.localdomain/33511] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 16:59:00,504 WARN [BP-1590739881-148.251.75.209-1686243471526 heartbeating to localhost.localdomain/127.0.0.1:34495] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-08 16:59:00,504 WARN [BP-1590739881-148.251.75.209-1686243471526 heartbeating to localhost.localdomain/127.0.0.1:34495] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1590739881-148.251.75.209-1686243471526 (Datanode Uuid 229c7354-8dfc-4d51-9ea2-5939476ed88a) service to localhost.localdomain/127.0.0.1:34495 2023-06-08 16:59:00,505 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82/cluster_bdc0776d-3fb6-7fd9-07f3-397efd8b688a/dfs/data/data3/current/BP-1590739881-148.251.75.209-1686243471526] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:59:00,506 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82/cluster_bdc0776d-3fb6-7fd9-07f3-397efd8b688a/dfs/data/data4/current/BP-1590739881-148.251.75.209-1686243471526] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:59:00,509 WARN [Listener at localhost.localdomain/33511] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 16:59:00,515 INFO [Listener at localhost.localdomain/33511] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 16:59:00,622 WARN [BP-1590739881-148.251.75.209-1686243471526 heartbeating to localhost.localdomain/127.0.0.1:34495] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-08 16:59:00,622 WARN [BP-1590739881-148.251.75.209-1686243471526 heartbeating to localhost.localdomain/127.0.0.1:34495] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1590739881-148.251.75.209-1686243471526 (Datanode Uuid 932545c7-073e-4d40-a6b5-66b0b0fdeba2) service to localhost.localdomain/127.0.0.1:34495 2023-06-08 16:59:00,623 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82/cluster_bdc0776d-3fb6-7fd9-07f3-397efd8b688a/dfs/data/data1/current/BP-1590739881-148.251.75.209-1686243471526] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:59:00,623 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82/cluster_bdc0776d-3fb6-7fd9-07f3-397efd8b688a/dfs/data/data2/current/BP-1590739881-148.251.75.209-1686243471526] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh 
disk information: sleep interrupted 2023-06-08 16:59:00,639 INFO [Listener at localhost.localdomain/33511] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-06-08 16:59:00,760 INFO [Listener at localhost.localdomain/33511] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-06-08 16:59:00,790 INFO [Listener at localhost.localdomain/33511] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-06-08 16:59:00,802 INFO [Listener at localhost.localdomain/33511] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRolling Thread=107 (was 95) - Thread LEAK? -, OpenFileDescriptor=538 (was 498) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=61 (was 30) - SystemLoadAverage LEAK? -, ProcessCount=182 (was 184), AvailableMemoryMB=1629 (was 1915) 2023-06-08 16:59:00,811 INFO [Listener at localhost.localdomain/33511] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=107, OpenFileDescriptor=538, MaxFileDescriptor=60000, SystemLoadAverage=61, ProcessCount=182, AvailableMemoryMB=1629 2023-06-08 16:59:00,811 INFO [Listener at localhost.localdomain/33511] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-06-08 16:59:00,811 INFO [Listener at localhost.localdomain/33511] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82/hadoop.log.dir so I do NOT create it in target/test-data/38a08fcb-64a6-e611-9893-d33447095f80 2023-06-08 16:59:00,811 INFO [Listener at localhost.localdomain/33511] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3ede3a4f-a7c6-730d-9b2a-ce5df78bbf82/hadoop.tmp.dir so I do NOT create it in target/test-data/38a08fcb-64a6-e611-9893-d33447095f80 2023-06-08 16:59:00,811 INFO [Listener at localhost.localdomain/33511] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/38a08fcb-64a6-e611-9893-d33447095f80/cluster_82b7012c-ca81-fbbe-981e-82f5fdf7f913, deleteOnExit=true 2023-06-08 16:59:00,811 INFO [Listener at localhost.localdomain/33511] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-06-08 16:59:00,812 INFO [Listener at localhost.localdomain/33511] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/38a08fcb-64a6-e611-9893-d33447095f80/test.cache.data in system properties and HBase conf 2023-06-08 16:59:00,812 INFO [Listener at localhost.localdomain/33511] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/38a08fcb-64a6-e611-9893-d33447095f80/hadoop.tmp.dir in system properties and HBase conf 2023-06-08 16:59:00,812 INFO [Listener at localhost.localdomain/33511] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/38a08fcb-64a6-e611-9893-d33447095f80/hadoop.log.dir in system properties and HBase conf 2023-06-08 16:59:00,812 INFO [Listener at localhost.localdomain/33511] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/38a08fcb-64a6-e611-9893-d33447095f80/mapreduce.cluster.local.dir in system properties and HBase conf 2023-06-08 16:59:00,812 INFO [Listener at localhost.localdomain/33511] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/38a08fcb-64a6-e611-9893-d33447095f80/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-06-08 16:59:00,812 INFO [Listener at localhost.localdomain/33511] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-06-08 16:59:00,812 DEBUG [Listener at localhost.localdomain/33511] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-06-08 16:59:00,812 INFO [Listener at localhost.localdomain/33511] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/38a08fcb-64a6-e611-9893-d33447095f80/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-06-08 16:59:00,812 INFO [Listener at localhost.localdomain/33511] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/38a08fcb-64a6-e611-9893-d33447095f80/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-06-08 16:59:00,813 INFO [Listener at localhost.localdomain/33511] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/38a08fcb-64a6-e611-9893-d33447095f80/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-06-08 16:59:00,813 INFO [Listener at localhost.localdomain/33511] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/38a08fcb-64a6-e611-9893-d33447095f80/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-08 16:59:00,813 INFO [Listener at localhost.localdomain/33511] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/38a08fcb-64a6-e611-9893-d33447095f80/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-06-08 16:59:00,813 INFO [Listener at localhost.localdomain/33511] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/38a08fcb-64a6-e611-9893-d33447095f80/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-06-08 16:59:00,813 INFO [Listener at localhost.localdomain/33511] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/38a08fcb-64a6-e611-9893-d33447095f80/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-06-08 16:59:00,813 INFO [Listener at localhost.localdomain/33511] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/38a08fcb-64a6-e611-9893-d33447095f80/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-08 16:59:00,813 INFO [Listener at localhost.localdomain/33511] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/38a08fcb-64a6-e611-9893-d33447095f80/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-06-08 16:59:00,813 INFO [Listener at localhost.localdomain/33511] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/38a08fcb-64a6-e611-9893-d33447095f80/nfs.dump.dir in system properties and HBase conf 2023-06-08 16:59:00,813 INFO [Listener at localhost.localdomain/33511] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/38a08fcb-64a6-e611-9893-d33447095f80/java.io.tmpdir in system properties and HBase conf 2023-06-08 16:59:00,813 INFO [Listener at localhost.localdomain/33511] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/38a08fcb-64a6-e611-9893-d33447095f80/dfs.journalnode.edits.dir in system properties and HBase conf 2023-06-08 16:59:00,813 INFO [Listener at localhost.localdomain/33511] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/38a08fcb-64a6-e611-9893-d33447095f80/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-06-08 16:59:00,814 INFO [Listener at localhost.localdomain/33511] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/38a08fcb-64a6-e611-9893-d33447095f80/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-06-08 16:59:00,815 WARN [Listener at localhost.localdomain/33511] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
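[Editor's note] The entries from 16:59:00,811 onward show the next test case (testLogRollOnNothingWritten) bringing up a fresh minicluster with StartMiniClusterOption{numMasters=1, numRegionServers=1, numDataNodes=2, numZkServers=1}. A minimal sketch of the setup call that yields such a startup sequence is given below; HBaseTestingUtility and StartMiniClusterOption are the standard APIs, while the class name is illustrative.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;
import org.junit.BeforeClass;

public class MiniClusterStartupSketch {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUpBeforeClass() throws Exception {
    // Mirrors the options logged above: 1 master, 1 region server, 2 DataNodes, 1 ZK server.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(1)
        .numDataNodes(2)
        .numZkServers(1)
        .build();
    // Starts MiniDFS, the MiniZooKeeperCluster and the HBase master/region server,
    // producing the "STARTING DFS", Jetty and DataNode block-report entries seen here.
    TEST_UTIL.startMiniCluster(option);
  }
}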
2023-06-08 16:59:00,817 WARN [Listener at localhost.localdomain/33511] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-08 16:59:00,817 WARN [Listener at localhost.localdomain/33511] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-08 16:59:00,838 WARN [Listener at localhost.localdomain/33511] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 16:59:00,840 INFO [Listener at localhost.localdomain/33511] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 16:59:00,844 INFO [Listener at localhost.localdomain/33511] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/38a08fcb-64a6-e611-9893-d33447095f80/java.io.tmpdir/Jetty_localhost_localdomain_43963_hdfs____v2onn5/webapp 2023-06-08 16:59:00,915 INFO [Listener at localhost.localdomain/33511] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:43963 2023-06-08 16:59:00,916 WARN [Listener at localhost.localdomain/33511] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-06-08 16:59:00,917 WARN [Listener at localhost.localdomain/33511] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-06-08 16:59:00,918 WARN [Listener at localhost.localdomain/33511] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-06-08 16:59:00,942 WARN [Listener at localhost.localdomain/34001] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 16:59:00,955 WARN [Listener at localhost.localdomain/34001] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 16:59:00,956 WARN [Listener at localhost.localdomain/34001] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 16:59:00,957 INFO [Listener at localhost.localdomain/34001] log.Slf4jLog(67): jetty-6.1.26 2023-06-08 16:59:00,962 INFO [Listener at localhost.localdomain/34001] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/38a08fcb-64a6-e611-9893-d33447095f80/java.io.tmpdir/Jetty_localhost_33907_datanode____dsf98n/webapp 2023-06-08 16:59:01,032 INFO [Listener at localhost.localdomain/34001] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33907 2023-06-08 16:59:01,037 WARN [Listener at localhost.localdomain/38133] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 16:59:01,047 WARN [Listener at localhost.localdomain/38133] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-06-08 16:59:01,049 WARN [Listener at localhost.localdomain/38133] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-06-08 16:59:01,050 INFO [Listener at localhost.localdomain/38133] 
log.Slf4jLog(67): jetty-6.1.26 2023-06-08 16:59:01,054 INFO [Listener at localhost.localdomain/38133] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/38a08fcb-64a6-e611-9893-d33447095f80/java.io.tmpdir/Jetty_localhost_38769_datanode____3we3a4/webapp 2023-06-08 16:59:01,100 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbbbbf6416fc92851: Processing first storage report for DS-4a9d9b90-de28-4bc4-9876-3857b7bc2440 from datanode 597dd3be-64aa-40bb-93e1-9f2692a22759 2023-06-08 16:59:01,100 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbbbbf6416fc92851: from storage DS-4a9d9b90-de28-4bc4-9876-3857b7bc2440 node DatanodeRegistration(127.0.0.1:46445, datanodeUuid=597dd3be-64aa-40bb-93e1-9f2692a22759, infoPort=35777, infoSecurePort=0, ipcPort=38133, storageInfo=lv=-57;cid=testClusterID;nsid=253982466;c=1686243540818), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 16:59:01,100 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbbbbf6416fc92851: Processing first storage report for DS-36135718-e764-44c1-b4ba-449ff33afa8b from datanode 597dd3be-64aa-40bb-93e1-9f2692a22759 2023-06-08 16:59:01,101 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbbbbf6416fc92851: from storage DS-36135718-e764-44c1-b4ba-449ff33afa8b node DatanodeRegistration(127.0.0.1:46445, datanodeUuid=597dd3be-64aa-40bb-93e1-9f2692a22759, infoPort=35777, infoSecurePort=0, ipcPort=38133, storageInfo=lv=-57;cid=testClusterID;nsid=253982466;c=1686243540818), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 16:59:01,134 INFO [Listener at localhost.localdomain/38133] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38769 2023-06-08 16:59:01,142 WARN [Listener at localhost.localdomain/36349] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-06-08 16:59:01,190 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xdec6a3f770a095c9: Processing first storage report for DS-feec76d2-635a-40a3-a858-8d7f24fd3575 from datanode 13181876-b43c-4d9f-a965-32bbf630d91d 2023-06-08 16:59:01,190 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xdec6a3f770a095c9: from storage DS-feec76d2-635a-40a3-a858-8d7f24fd3575 node DatanodeRegistration(127.0.0.1:37267, datanodeUuid=13181876-b43c-4d9f-a965-32bbf630d91d, infoPort=41811, infoSecurePort=0, ipcPort=36349, storageInfo=lv=-57;cid=testClusterID;nsid=253982466;c=1686243540818), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 16:59:01,190 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xdec6a3f770a095c9: Processing first storage report for DS-1aa58d69-7962-41fa-8b5b-f10c6ba4e3c6 from datanode 13181876-b43c-4d9f-a965-32bbf630d91d 2023-06-08 16:59:01,190 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xdec6a3f770a095c9: from storage DS-1aa58d69-7962-41fa-8b5b-f10c6ba4e3c6 node DatanodeRegistration(127.0.0.1:37267, 
datanodeUuid=13181876-b43c-4d9f-a965-32bbf630d91d, infoPort=41811, infoSecurePort=0, ipcPort=36349, storageInfo=lv=-57;cid=testClusterID;nsid=253982466;c=1686243540818), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-06-08 16:59:01,250 DEBUG [Listener at localhost.localdomain/36349] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/38a08fcb-64a6-e611-9893-d33447095f80 2023-06-08 16:59:01,252 INFO [Listener at localhost.localdomain/36349] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/38a08fcb-64a6-e611-9893-d33447095f80/cluster_82b7012c-ca81-fbbe-981e-82f5fdf7f913/zookeeper_0, clientPort=58855, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/38a08fcb-64a6-e611-9893-d33447095f80/cluster_82b7012c-ca81-fbbe-981e-82f5fdf7f913/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/38a08fcb-64a6-e611-9893-d33447095f80/cluster_82b7012c-ca81-fbbe-981e-82f5fdf7f913/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-06-08 16:59:01,254 INFO [Listener at localhost.localdomain/36349] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=58855 2023-06-08 16:59:01,254 INFO [Listener at localhost.localdomain/36349] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:59:01,255 INFO [Listener at localhost.localdomain/36349] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:59:01,273 INFO [Listener at localhost.localdomain/36349] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c with version=8 2023-06-08 16:59:01,274 INFO [Listener at localhost.localdomain/36349] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:33111/user/jenkins/test-data/a7be4fbb-0e30-a3d7-a672-01c8381eefeb/hbase-staging 2023-06-08 16:59:01,277 INFO [Listener at localhost.localdomain/36349] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-06-08 16:59:01,277 INFO [Listener at localhost.localdomain/36349] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 16:59:01,277 INFO [Listener at localhost.localdomain/36349] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-08 16:59:01,278 INFO [Listener at localhost.localdomain/36349] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-08 16:59:01,278 INFO [Listener at localhost.localdomain/36349] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class 
java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 16:59:01,278 INFO [Listener at localhost.localdomain/36349] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-08 16:59:01,278 INFO [Listener at localhost.localdomain/36349] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-06-08 16:59:01,280 INFO [Listener at localhost.localdomain/36349] ipc.NettyRpcServer(120): Bind to /148.251.75.209:46821 2023-06-08 16:59:01,280 INFO [Listener at localhost.localdomain/36349] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:59:01,281 INFO [Listener at localhost.localdomain/36349] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:59:01,282 INFO [Listener at localhost.localdomain/36349] zookeeper.RecoverableZooKeeper(93): Process identifier=master:46821 connecting to ZooKeeper ensemble=127.0.0.1:58855 2023-06-08 16:59:01,287 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): master:468210x0, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-08 16:59:01,288 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:46821-0x101cba8992f0000 connected 2023-06-08 16:59:01,297 DEBUG [Listener at localhost.localdomain/36349] zookeeper.ZKUtil(164): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-08 16:59:01,298 DEBUG [Listener at localhost.localdomain/36349] zookeeper.ZKUtil(164): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 16:59:01,298 DEBUG [Listener at localhost.localdomain/36349] zookeeper.ZKUtil(164): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-08 16:59:01,298 DEBUG [Listener at localhost.localdomain/36349] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46821 2023-06-08 16:59:01,299 DEBUG [Listener at localhost.localdomain/36349] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46821 2023-06-08 16:59:01,299 DEBUG [Listener at localhost.localdomain/36349] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46821 2023-06-08 16:59:01,300 DEBUG [Listener at localhost.localdomain/36349] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46821 2023-06-08 16:59:01,300 DEBUG [Listener at localhost.localdomain/36349] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46821 2023-06-08 16:59:01,300 INFO [Listener at localhost.localdomain/36349] master.HMaster(444): 
hbase.rootdir=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c, hbase.cluster.distributed=false 2023-06-08 16:59:01,315 INFO [Listener at localhost.localdomain/36349] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-06-08 16:59:01,315 INFO [Listener at localhost.localdomain/36349] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 16:59:01,315 INFO [Listener at localhost.localdomain/36349] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-06-08 16:59:01,315 INFO [Listener at localhost.localdomain/36349] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-06-08 16:59:01,315 INFO [Listener at localhost.localdomain/36349] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-06-08 16:59:01,315 INFO [Listener at localhost.localdomain/36349] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-06-08 16:59:01,316 INFO [Listener at localhost.localdomain/36349] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-06-08 16:59:01,318 INFO [Listener at localhost.localdomain/36349] ipc.NettyRpcServer(120): Bind to /148.251.75.209:38189 2023-06-08 16:59:01,318 INFO [Listener at localhost.localdomain/36349] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-06-08 16:59:01,320 DEBUG [Listener at localhost.localdomain/36349] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-06-08 16:59:01,320 INFO [Listener at localhost.localdomain/36349] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:59:01,321 INFO [Listener at localhost.localdomain/36349] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:59:01,322 INFO [Listener at localhost.localdomain/36349] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38189 connecting to ZooKeeper ensemble=127.0.0.1:58855 2023-06-08 16:59:01,324 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): regionserver:381890x0, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-06-08 16:59:01,325 DEBUG [Listener at localhost.localdomain/36349] zookeeper.ZKUtil(164): regionserver:381890x0, quorum=127.0.0.1:58855, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-08 16:59:01,325 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38189-0x101cba8992f0001 connected 2023-06-08 16:59:01,326 DEBUG [Listener at localhost.localdomain/36349] zookeeper.ZKUtil(164): regionserver:38189-0x101cba8992f0001, 
quorum=127.0.0.1:58855, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 16:59:01,326 DEBUG [Listener at localhost.localdomain/36349] zookeeper.ZKUtil(164): regionserver:38189-0x101cba8992f0001, quorum=127.0.0.1:58855, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-06-08 16:59:01,327 DEBUG [Listener at localhost.localdomain/36349] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38189 2023-06-08 16:59:01,327 DEBUG [Listener at localhost.localdomain/36349] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38189 2023-06-08 16:59:01,327 DEBUG [Listener at localhost.localdomain/36349] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38189 2023-06-08 16:59:01,327 DEBUG [Listener at localhost.localdomain/36349] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38189 2023-06-08 16:59:01,327 DEBUG [Listener at localhost.localdomain/36349] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38189 2023-06-08 16:59:01,328 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,46821,1686243541276 2023-06-08 16:59:01,329 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-08 16:59:01,330 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,46821,1686243541276 2023-06-08 16:59:01,330 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): regionserver:38189-0x101cba8992f0001, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-08 16:59:01,330 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-06-08 16:59:01,331 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:59:01,331 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-08 16:59:01,332 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,46821,1686243541276 from backup master directory 2023-06-08 16:59:01,332 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-06-08 16:59:01,333 DEBUG [Listener at localhost.localdomain/36349-EventThread] 
zookeeper.ZKWatcher(600): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,46821,1686243541276 2023-06-08 16:59:01,333 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-06-08 16:59:01,333 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,46821,1686243541276 2023-06-08 16:59:01,333 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-06-08 16:59:01,346 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/hbase.id with ID: f5c0c759-8802-4491-8bf5-20714fd81327 2023-06-08 16:59:01,356 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:59:01,358 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:59:01,367 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x15c75458 to 127.0.0.1:58855 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-08 16:59:01,372 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@105dd3f6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-08 16:59:01,372 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-06-08 16:59:01,373 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-06-08 16:59:01,373 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 16:59:01,376 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', 
MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/MasterData/data/master/store-tmp 2023-06-08 16:59:01,387 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:59:01,387 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-08 16:59:01,387 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:59:01,387 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:59:01,387 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-08 16:59:01,387 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:59:01,387 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:59:01,387 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-08 16:59:01,388 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/MasterData/WALs/jenkins-hbase20.apache.org,46821,1686243541276 2023-06-08 16:59:01,392 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C46821%2C1686243541276, suffix=, logDir=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/MasterData/WALs/jenkins-hbase20.apache.org,46821,1686243541276, archiveDir=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/MasterData/oldWALs, maxLogs=10 2023-06-08 16:59:01,400 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/MasterData/WALs/jenkins-hbase20.apache.org,46821,1686243541276/jenkins-hbase20.apache.org%2C46821%2C1686243541276.1686243541393 2023-06-08 16:59:01,401 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46445,DS-4a9d9b90-de28-4bc4-9876-3857b7bc2440,DISK], DatanodeInfoWithStorage[127.0.0.1:37267,DS-feec76d2-635a-40a3-a858-8d7f24fd3575,DISK]] 2023-06-08 16:59:01,401 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-06-08 16:59:01,401 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated 
master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:59:01,401 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:59:01,401 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:59:01,405 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:59:01,408 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-06-08 16:59:01,409 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-06-08 16:59:01,410 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:59:01,411 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:59:01,412 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:59:01,416 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-06-08 16:59:01,419 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 16:59:01,420 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; 
SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=875607, jitterRate=0.1133931428194046}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-08 16:59:01,420 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-08 16:59:01,421 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-06-08 16:59:01,422 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-06-08 16:59:01,422 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-06-08 16:59:01,422 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-06-08 16:59:01,423 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-06-08 16:59:01,423 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-06-08 16:59:01,423 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-06-08 16:59:01,424 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-06-08 16:59:01,425 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-06-08 16:59:01,435 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-06-08 16:59:01,435 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
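[Editor's note] At this point the log shows the new master registered as active (ActiveMasterManager at 16:59:01,333) with its local master:store region, procedure executor and balancer initialized. A hedged sketch of how a test or client could confirm the active master through the public Admin API follows; the class name and standalone main method are illustrative, the HBase calls themselves are standard.

import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;

public class ActiveMasterCheckSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    util.startMiniCluster();
    try (Admin admin = util.getAdmin()) {
      ClusterMetrics metrics = admin.getClusterMetrics();
      ServerName master = metrics.getMasterName();
      // Prints a ServerName of the form host,port,startcode, e.g. the
      // jenkins-hbase20.apache.org,46821,1686243541276 reported above.
      System.out.println("Active master: " + master);
    } finally {
      util.shutdownMiniCluster();
    }
  }
}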
2023-06-08 16:59:01,436 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-06-08 16:59:01,436 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-06-08 16:59:01,436 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-06-08 16:59:01,437 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:59:01,438 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-06-08 16:59:01,438 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-06-08 16:59:01,439 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-06-08 16:59:01,439 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-08 16:59:01,439 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): regionserver:38189-0x101cba8992f0001, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-06-08 16:59:01,439 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:59:01,440 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,46821,1686243541276, sessionid=0x101cba8992f0000, setting cluster-up flag (Was=false) 2023-06-08 16:59:01,442 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:59:01,445 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-06-08 16:59:01,446 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,46821,1686243541276 2023-06-08 16:59:01,447 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:59:01,450 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-06-08 16:59:01,450 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,46821,1686243541276 2023-06-08 16:59:01,451 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/.hbase-snapshot/.tmp 2023-06-08 16:59:01,453 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-06-08 16:59:01,453 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-08 16:59:01,453 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-08 16:59:01,453 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-08 16:59:01,453 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-06-08 16:59:01,453 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-06-08 16:59:01,453 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:59:01,453 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-08 16:59:01,453 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:59:01,455 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1686243571455 2023-06-08 16:59:01,456 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-06-08 16:59:01,456 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-06-08 16:59:01,456 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-06-08 16:59:01,456 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): 
Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-06-08 16:59:01,456 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-06-08 16:59:01,456 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-06-08 16:59:01,456 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-08 16:59:01,457 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-06-08 16:59:01,457 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-06-08 16:59:01,457 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-06-08 16:59:01,457 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-06-08 16:59:01,457 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-06-08 16:59:01,457 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-06-08 16:59:01,457 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-06-08 16:59:01,457 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686243541457,5,FailOnTimeoutGroup] 2023-06-08 16:59:01,457 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686243541457,5,FailOnTimeoutGroup] 2023-06-08 16:59:01,457 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-06-08 16:59:01,457 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-06-08 16:59:01,457 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-06-08 16:59:01,457 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-06-08 16:59:01,458 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-08 16:59:01,466 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-08 16:59:01,467 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-06-08 16:59:01,467 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c 2023-06-08 16:59:01,474 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:59:01,475 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-08 16:59:01,477 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/meta/1588230740/info 2023-06-08 16:59:01,477 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-08 16:59:01,477 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:59:01,478 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-08 16:59:01,479 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/meta/1588230740/rep_barrier 2023-06-08 16:59:01,479 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-08 16:59:01,480 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:59:01,480 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-08 16:59:01,481 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/meta/1588230740/table 2023-06-08 16:59:01,481 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle 
point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-08 16:59:01,481 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:59:01,482 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/meta/1588230740 2023-06-08 16:59:01,482 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/meta/1588230740 2023-06-08 16:59:01,484 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-08 16:59:01,485 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-08 16:59:01,487 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 16:59:01,487 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=763765, jitterRate=-0.028822720050811768}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-08 16:59:01,487 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-08 16:59:01,487 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-08 16:59:01,487 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-08 16:59:01,487 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-08 16:59:01,487 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-08 16:59:01,487 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-08 16:59:01,488 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-08 16:59:01,488 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-08 16:59:01,488 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-06-08 16:59:01,488 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-06-08 16:59:01,488 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure 
table=hbase:meta, region=1588230740, ASSIGN}] 2023-06-08 16:59:01,490 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-06-08 16:59:01,491 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-06-08 16:59:01,530 INFO [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(951): ClusterId : f5c0c759-8802-4491-8bf5-20714fd81327 2023-06-08 16:59:01,532 DEBUG [RS:0;jenkins-hbase20:38189] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-06-08 16:59:01,536 DEBUG [RS:0;jenkins-hbase20:38189] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-06-08 16:59:01,536 DEBUG [RS:0;jenkins-hbase20:38189] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-06-08 16:59:01,539 DEBUG [RS:0;jenkins-hbase20:38189] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-06-08 16:59:01,540 DEBUG [RS:0;jenkins-hbase20:38189] zookeeper.ReadOnlyZKClient(139): Connect 0x18bd88da to 127.0.0.1:58855 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-08 16:59:01,547 DEBUG [RS:0;jenkins-hbase20:38189] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@167a65bd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-08 16:59:01,547 DEBUG [RS:0;jenkins-hbase20:38189] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1245e3aa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-08 16:59:01,558 DEBUG [RS:0;jenkins-hbase20:38189] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:38189 2023-06-08 16:59:01,558 INFO [RS:0;jenkins-hbase20:38189] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-06-08 16:59:01,558 INFO [RS:0;jenkins-hbase20:38189] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-06-08 16:59:01,558 DEBUG [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(1022): About to register with Master. 
2023-06-08 16:59:01,558 INFO [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,46821,1686243541276 with isa=jenkins-hbase20.apache.org/148.251.75.209:38189, startcode=1686243541315 2023-06-08 16:59:01,559 DEBUG [RS:0;jenkins-hbase20:38189] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-06-08 16:59:01,562 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:58381, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-06-08 16:59:01,563 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46821] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,38189,1686243541315 2023-06-08 16:59:01,564 DEBUG [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c 2023-06-08 16:59:01,564 DEBUG [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:34001 2023-06-08 16:59:01,564 DEBUG [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-06-08 16:59:01,565 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 16:59:01,566 DEBUG [RS:0;jenkins-hbase20:38189] zookeeper.ZKUtil(162): regionserver:38189-0x101cba8992f0001, quorum=127.0.0.1:58855, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38189,1686243541315 2023-06-08 16:59:01,566 WARN [RS:0;jenkins-hbase20:38189] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-06-08 16:59:01,566 INFO [RS:0;jenkins-hbase20:38189] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 16:59:01,566 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,38189,1686243541315] 2023-06-08 16:59:01,566 DEBUG [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/WALs/jenkins-hbase20.apache.org,38189,1686243541315 2023-06-08 16:59:01,570 DEBUG [RS:0;jenkins-hbase20:38189] zookeeper.ZKUtil(162): regionserver:38189-0x101cba8992f0001, quorum=127.0.0.1:58855, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38189,1686243541315 2023-06-08 16:59:01,571 DEBUG [RS:0;jenkins-hbase20:38189] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-06-08 16:59:01,571 INFO [RS:0;jenkins-hbase20:38189] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-06-08 16:59:01,572 INFO [RS:0;jenkins-hbase20:38189] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-06-08 16:59:01,572 INFO [RS:0;jenkins-hbase20:38189] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-06-08 16:59:01,572 INFO [RS:0;jenkins-hbase20:38189] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-06-08 16:59:01,572 INFO [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-06-08 16:59:01,573 INFO [RS:0;jenkins-hbase20:38189] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-06-08 16:59:01,573 DEBUG [RS:0;jenkins-hbase20:38189] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:59:01,573 DEBUG [RS:0;jenkins-hbase20:38189] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:59:01,573 DEBUG [RS:0;jenkins-hbase20:38189] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:59:01,573 DEBUG [RS:0;jenkins-hbase20:38189] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:59:01,573 DEBUG [RS:0;jenkins-hbase20:38189] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:59:01,573 DEBUG [RS:0;jenkins-hbase20:38189] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-06-08 16:59:01,573 DEBUG [RS:0;jenkins-hbase20:38189] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:59:01,574 DEBUG [RS:0;jenkins-hbase20:38189] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:59:01,574 DEBUG [RS:0;jenkins-hbase20:38189] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:59:01,574 DEBUG [RS:0;jenkins-hbase20:38189] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-06-08 16:59:01,574 INFO [RS:0;jenkins-hbase20:38189] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-06-08 16:59:01,574 INFO [RS:0;jenkins-hbase20:38189] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-06-08 16:59:01,574 INFO [RS:0;jenkins-hbase20:38189] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-06-08 16:59:01,585 INFO [RS:0;jenkins-hbase20:38189] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-06-08 16:59:01,585 INFO [RS:0;jenkins-hbase20:38189] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,38189,1686243541315-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-06-08 16:59:01,594 INFO [RS:0;jenkins-hbase20:38189] regionserver.Replication(203): jenkins-hbase20.apache.org,38189,1686243541315 started 2023-06-08 16:59:01,594 INFO [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,38189,1686243541315, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:38189, sessionid=0x101cba8992f0001 2023-06-08 16:59:01,595 DEBUG [RS:0;jenkins-hbase20:38189] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-06-08 16:59:01,595 DEBUG [RS:0;jenkins-hbase20:38189] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,38189,1686243541315 2023-06-08 16:59:01,595 DEBUG [RS:0;jenkins-hbase20:38189] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,38189,1686243541315' 2023-06-08 16:59:01,595 DEBUG [RS:0;jenkins-hbase20:38189] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-06-08 16:59:01,595 DEBUG [RS:0;jenkins-hbase20:38189] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-06-08 16:59:01,595 DEBUG [RS:0;jenkins-hbase20:38189] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-06-08 16:59:01,595 DEBUG [RS:0;jenkins-hbase20:38189] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-06-08 16:59:01,595 DEBUG [RS:0;jenkins-hbase20:38189] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,38189,1686243541315 2023-06-08 16:59:01,595 DEBUG [RS:0;jenkins-hbase20:38189] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,38189,1686243541315' 2023-06-08 16:59:01,595 DEBUG [RS:0;jenkins-hbase20:38189] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-06-08 16:59:01,596 DEBUG [RS:0;jenkins-hbase20:38189] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-06-08 16:59:01,596 DEBUG [RS:0;jenkins-hbase20:38189] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-06-08 16:59:01,596 INFO [RS:0;jenkins-hbase20:38189] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-06-08 16:59:01,596 INFO [RS:0;jenkins-hbase20:38189] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-06-08 16:59:01,641 DEBUG [jenkins-hbase20:46821] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-06-08 16:59:01,643 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,38189,1686243541315, state=OPENING 2023-06-08 16:59:01,644 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-06-08 16:59:01,645 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:59:01,646 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,38189,1686243541315}] 2023-06-08 16:59:01,646 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-08 16:59:01,701 INFO [RS:0;jenkins-hbase20:38189] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C38189%2C1686243541315, suffix=, logDir=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/WALs/jenkins-hbase20.apache.org,38189,1686243541315, archiveDir=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/oldWALs, maxLogs=32 2023-06-08 16:59:01,713 INFO [RS:0;jenkins-hbase20:38189] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/WALs/jenkins-hbase20.apache.org,38189,1686243541315/jenkins-hbase20.apache.org%2C38189%2C1686243541315.1686243541702 2023-06-08 16:59:01,714 DEBUG [RS:0;jenkins-hbase20:38189] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46445,DS-4a9d9b90-de28-4bc4-9876-3857b7bc2440,DISK], DatanodeInfoWithStorage[127.0.0.1:37267,DS-feec76d2-635a-40a3-a858-8d7f24fd3575,DISK]] 2023-06-08 16:59:01,803 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,38189,1686243541315 2023-06-08 16:59:01,804 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-06-08 16:59:01,809 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:48758, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-06-08 16:59:01,814 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-06-08 16:59:01,814 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 16:59:01,816 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C38189%2C1686243541315.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/WALs/jenkins-hbase20.apache.org,38189,1686243541315, archiveDir=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/oldWALs, maxLogs=32 2023-06-08 16:59:01,821 
INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/WALs/jenkins-hbase20.apache.org,38189,1686243541315/jenkins-hbase20.apache.org%2C38189%2C1686243541315.meta.1686243541816.meta 2023-06-08 16:59:01,821 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37267,DS-feec76d2-635a-40a3-a858-8d7f24fd3575,DISK], DatanodeInfoWithStorage[127.0.0.1:46445,DS-4a9d9b90-de28-4bc4-9876-3857b7bc2440,DISK]] 2023-06-08 16:59:01,822 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-06-08 16:59:01,822 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-06-08 16:59:01,822 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-06-08 16:59:01,822 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-06-08 16:59:01,822 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-06-08 16:59:01,822 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:59:01,822 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-06-08 16:59:01,822 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-06-08 16:59:01,823 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-06-08 16:59:01,824 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/meta/1588230740/info 2023-06-08 16:59:01,824 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/meta/1588230740/info 2023-06-08 16:59:01,824 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-06-08 16:59:01,825 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:59:01,825 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-06-08 16:59:01,825 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/meta/1588230740/rep_barrier 2023-06-08 16:59:01,825 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/meta/1588230740/rep_barrier 2023-06-08 16:59:01,826 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-06-08 16:59:01,826 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:59:01,826 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-06-08 16:59:01,827 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/meta/1588230740/table 2023-06-08 16:59:01,827 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/meta/1588230740/table 2023-06-08 16:59:01,827 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-06-08 16:59:01,828 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:59:01,828 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/meta/1588230740 2023-06-08 16:59:01,829 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/meta/1588230740 2023-06-08 16:59:01,832 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-06-08 16:59:01,833 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-06-08 16:59:01,834 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=735158, jitterRate=-0.06519901752471924}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-06-08 16:59:01,834 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-06-08 16:59:01,837 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1686243541803 2023-06-08 16:59:01,841 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-06-08 16:59:01,841 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-06-08 16:59:01,842 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,38189,1686243541315, state=OPEN 2023-06-08 16:59:01,843 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-06-08 16:59:01,843 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-06-08 16:59:01,846 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-06-08 16:59:01,846 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,38189,1686243541315 in 197 msec 2023-06-08 
16:59:01,848 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-06-08 16:59:01,848 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 358 msec 2023-06-08 16:59:01,850 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 397 msec 2023-06-08 16:59:01,850 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1686243541850, completionTime=-1 2023-06-08 16:59:01,850 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-06-08 16:59:01,850 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-06-08 16:59:01,853 DEBUG [hconnection-0x7b52b08f-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-08 16:59:01,855 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:48766, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-08 16:59:01,857 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-06-08 16:59:01,857 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1686243601857 2023-06-08 16:59:01,857 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1686243661857 2023-06-08 16:59:01,857 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 7 msec 2023-06-08 16:59:01,867 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,46821,1686243541276-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-06-08 16:59:01,867 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,46821,1686243541276-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-08 16:59:01,867 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,46821,1686243541276-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-06-08 16:59:01,867 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:46821, period=300000, unit=MILLISECONDS is enabled. 2023-06-08 16:59:01,867 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-06-08 16:59:01,867 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-06-08 16:59:01,868 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-06-08 16:59:01,869 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-06-08 16:59:01,869 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-06-08 16:59:01,870 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-06-08 16:59:01,871 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-06-08 16:59:01,873 DEBUG [HFileArchiver-11] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/.tmp/data/hbase/namespace/c41116c8718e30f400131f9ee41eb57a 2023-06-08 16:59:01,873 DEBUG [HFileArchiver-11] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/.tmp/data/hbase/namespace/c41116c8718e30f400131f9ee41eb57a empty. 2023-06-08 16:59:01,874 DEBUG [HFileArchiver-11] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/.tmp/data/hbase/namespace/c41116c8718e30f400131f9ee41eb57a 2023-06-08 16:59:01,874 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-06-08 16:59:01,881 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-06-08 16:59:01,882 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => c41116c8718e30f400131f9ee41eb57a, NAME => 'hbase:namespace,,1686243541868.c41116c8718e30f400131f9ee41eb57a.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/.tmp 2023-06-08 16:59:01,889 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686243541868.c41116c8718e30f400131f9ee41eb57a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:59:01,889 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing c41116c8718e30f400131f9ee41eb57a, disabling compactions & flushes 2023-06-08 16:59:01,889 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region 
hbase:namespace,,1686243541868.c41116c8718e30f400131f9ee41eb57a. 2023-06-08 16:59:01,889 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686243541868.c41116c8718e30f400131f9ee41eb57a. 2023-06-08 16:59:01,889 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686243541868.c41116c8718e30f400131f9ee41eb57a. after waiting 0 ms 2023-06-08 16:59:01,889 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686243541868.c41116c8718e30f400131f9ee41eb57a. 2023-06-08 16:59:01,889 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686243541868.c41116c8718e30f400131f9ee41eb57a. 2023-06-08 16:59:01,889 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for c41116c8718e30f400131f9ee41eb57a: 2023-06-08 16:59:01,891 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-06-08 16:59:01,892 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1686243541868.c41116c8718e30f400131f9ee41eb57a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686243541892"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1686243541892"}]},"ts":"1686243541892"} 2023-06-08 16:59:01,894 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-06-08 16:59:01,895 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-06-08 16:59:01,895 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686243541895"}]},"ts":"1686243541895"} 2023-06-08 16:59:01,896 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-06-08 16:59:01,901 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=c41116c8718e30f400131f9ee41eb57a, ASSIGN}] 2023-06-08 16:59:01,903 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=c41116c8718e30f400131f9ee41eb57a, ASSIGN 2023-06-08 16:59:01,903 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=c41116c8718e30f400131f9ee41eb57a, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,38189,1686243541315; forceNewPlan=false, retain=false 2023-06-08 16:59:02,055 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=c41116c8718e30f400131f9ee41eb57a, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,38189,1686243541315 2023-06-08 16:59:02,056 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1686243541868.c41116c8718e30f400131f9ee41eb57a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686243542055"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1686243542055"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1686243542055"}]},"ts":"1686243542055"} 2023-06-08 16:59:02,059 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure c41116c8718e30f400131f9ee41eb57a, server=jenkins-hbase20.apache.org,38189,1686243541315}] 2023-06-08 16:59:02,223 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1686243541868.c41116c8718e30f400131f9ee41eb57a. 2023-06-08 16:59:02,224 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c41116c8718e30f400131f9ee41eb57a, NAME => 'hbase:namespace,,1686243541868.c41116c8718e30f400131f9ee41eb57a.', STARTKEY => '', ENDKEY => ''} 2023-06-08 16:59:02,224 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace c41116c8718e30f400131f9ee41eb57a 2023-06-08 16:59:02,225 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1686243541868.c41116c8718e30f400131f9ee41eb57a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-06-08 16:59:02,225 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for c41116c8718e30f400131f9ee41eb57a 2023-06-08 16:59:02,225 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for c41116c8718e30f400131f9ee41eb57a 2023-06-08 16:59:02,228 INFO [StoreOpener-c41116c8718e30f400131f9ee41eb57a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region c41116c8718e30f400131f9ee41eb57a 2023-06-08 16:59:02,230 DEBUG [StoreOpener-c41116c8718e30f400131f9ee41eb57a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/namespace/c41116c8718e30f400131f9ee41eb57a/info 2023-06-08 16:59:02,230 DEBUG [StoreOpener-c41116c8718e30f400131f9ee41eb57a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/namespace/c41116c8718e30f400131f9ee41eb57a/info 2023-06-08 16:59:02,230 INFO [StoreOpener-c41116c8718e30f400131f9ee41eb57a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c41116c8718e30f400131f9ee41eb57a columnFamilyName info 2023-06-08 16:59:02,231 INFO [StoreOpener-c41116c8718e30f400131f9ee41eb57a-1] regionserver.HStore(310): Store=c41116c8718e30f400131f9ee41eb57a/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-06-08 16:59:02,231 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/namespace/c41116c8718e30f400131f9ee41eb57a 2023-06-08 16:59:02,232 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/namespace/c41116c8718e30f400131f9ee41eb57a 2023-06-08 16:59:02,235 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for c41116c8718e30f400131f9ee41eb57a 2023-06-08 16:59:02,236 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/namespace/c41116c8718e30f400131f9ee41eb57a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-06-08 16:59:02,237 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened c41116c8718e30f400131f9ee41eb57a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=877672, jitterRate=0.11601801216602325}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-06-08 16:59:02,237 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for c41116c8718e30f400131f9ee41eb57a: 2023-06-08 16:59:02,238 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1686243541868.c41116c8718e30f400131f9ee41eb57a., pid=6, masterSystemTime=1686243542214 2023-06-08 16:59:02,240 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1686243541868.c41116c8718e30f400131f9ee41eb57a. 2023-06-08 16:59:02,240 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1686243541868.c41116c8718e30f400131f9ee41eb57a. 
2023-06-08 16:59:02,241 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=c41116c8718e30f400131f9ee41eb57a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,38189,1686243541315 2023-06-08 16:59:02,241 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1686243541868.c41116c8718e30f400131f9ee41eb57a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1686243542241"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1686243542241"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1686243542241"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1686243542241"}]},"ts":"1686243542241"} 2023-06-08 16:59:02,244 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-06-08 16:59:02,244 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure c41116c8718e30f400131f9ee41eb57a, server=jenkins-hbase20.apache.org,38189,1686243541315 in 183 msec 2023-06-08 16:59:02,245 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-06-08 16:59:02,245 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=c41116c8718e30f400131f9ee41eb57a, ASSIGN in 344 msec 2023-06-08 16:59:02,246 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-06-08 16:59:02,246 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1686243542246"}]},"ts":"1686243542246"} 2023-06-08 16:59:02,247 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-06-08 16:59:02,249 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-06-08 16:59:02,250 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 381 msec 2023-06-08 16:59:02,270 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-06-08 16:59:02,271 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-06-08 16:59:02,271 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:59:02,275 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-06-08 16:59:02,283 DEBUG [Listener at localhost.localdomain/36349-EventThread] 
zookeeper.ZKWatcher(600): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-08 16:59:02,286 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 11 msec 2023-06-08 16:59:02,297 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-06-08 16:59:02,304 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-06-08 16:59:02,309 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 12 msec 2023-06-08 16:59:02,321 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-06-08 16:59:02,323 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-06-08 16:59:02,323 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.990sec 2023-06-08 16:59:02,323 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-06-08 16:59:02,323 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-06-08 16:59:02,323 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-06-08 16:59:02,323 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,46821,1686243541276-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-06-08 16:59:02,323 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,46821,1686243541276-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
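The two CreateNamespaceProcedure runs above (pid=7 for 'default', pid=8 for 'hbase') are the master creating its built-in namespaces during initialization. The same operation is available to clients through the Admin API; a short sketch, with an illustrative namespace name and a default client Configuration:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class CreateNamespaceSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Submits a CreateNamespaceProcedure to the master, the same
          // procedure type logged above as pid=7 and pid=8.
          admin.createNamespace(NamespaceDescriptor.create("example_ns").build());
        }
      }
    }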
2023-06-08 16:59:02,325 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-06-08 16:59:02,330 DEBUG [Listener at localhost.localdomain/36349] zookeeper.ReadOnlyZKClient(139): Connect 0x6efbe549 to 127.0.0.1:58855 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-06-08 16:59:02,337 DEBUG [Listener at localhost.localdomain/36349] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6de79672, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-06-08 16:59:02,339 DEBUG [hconnection-0x74bca290-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-06-08 16:59:02,340 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:48776, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-06-08 16:59:02,342 INFO [Listener at localhost.localdomain/36349] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,46821,1686243541276 2023-06-08 16:59:02,342 INFO [Listener at localhost.localdomain/36349] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-06-08 16:59:02,347 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-06-08 16:59:02,347 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:59:02,348 INFO [Listener at localhost.localdomain/36349] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-06-08 16:59:02,348 INFO [Listener at localhost.localdomain/36349] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-06-08 16:59:02,349 INFO [Listener at localhost.localdomain/36349] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=test.com%2C8080%2C1, suffix=, logDir=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/WALs/test.com,8080,1, archiveDir=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/oldWALs, maxLogs=32 2023-06-08 16:59:02,354 INFO [Listener at localhost.localdomain/36349] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/WALs/test.com,8080,1/test.com%2C8080%2C1.1686243542350 2023-06-08 16:59:02,354 DEBUG [Listener at localhost.localdomain/36349] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46445,DS-4a9d9b90-de28-4bc4-9876-3857b7bc2440,DISK], DatanodeInfoWithStorage[127.0.0.1:37267,DS-feec76d2-635a-40a3-a858-8d7f24fd3575,DISK]] 2023-06-08 16:59:02,364 INFO [Listener at localhost.localdomain/36349] wal.AbstractFSWAL(802): Rolled WAL 
/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/WALs/test.com,8080,1/test.com%2C8080%2C1.1686243542350 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/WALs/test.com,8080,1/test.com%2C8080%2C1.1686243542355 2023-06-08 16:59:02,364 DEBUG [Listener at localhost.localdomain/36349] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37267,DS-feec76d2-635a-40a3-a858-8d7f24fd3575,DISK], DatanodeInfoWithStorage[127.0.0.1:46445,DS-4a9d9b90-de28-4bc4-9876-3857b7bc2440,DISK]] 2023-06-08 16:59:02,364 DEBUG [Listener at localhost.localdomain/36349] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/WALs/test.com,8080,1/test.com%2C8080%2C1.1686243542350 is not closed yet, will try archiving it next time 2023-06-08 16:59:02,365 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/WALs/test.com,8080,1 2023-06-08 16:59:02,374 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/WALs/test.com,8080,1/test.com%2C8080%2C1.1686243542350 to hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/oldWALs/test.com%2C8080%2C1.1686243542350 2023-06-08 16:59:02,376 DEBUG [Listener at localhost.localdomain/36349] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/oldWALs 2023-06-08 16:59:02,376 INFO [Listener at localhost.localdomain/36349] wal.AbstractFSWAL(1031): Closed WAL: FSHLog test.com%2C8080%2C1:(num 1686243542355) 2023-06-08 16:59:02,376 INFO [Listener at localhost.localdomain/36349] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-06-08 16:59:02,376 DEBUG [Listener at localhost.localdomain/36349] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6efbe549 to 127.0.0.1:58855 2023-06-08 16:59:02,376 DEBUG [Listener at localhost.localdomain/36349] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 16:59:02,377 DEBUG [Listener at localhost.localdomain/36349] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-06-08 16:59:02,378 DEBUG [Listener at localhost.localdomain/36349] util.JVMClusterUtil(257): Found active master hash=1943549406, stopped=false 2023-06-08 16:59:02,378 INFO [Listener at localhost.localdomain/36349] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,46821,1686243541276 2023-06-08 16:59:02,380 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): regionserver:38189-0x101cba8992f0001, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-08 16:59:02,380 INFO [Listener at localhost.localdomain/36349] procedure2.ProcedureExecutor(629): Stopping 2023-06-08 16:59:02,380 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-06-08 16:59:02,381 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 
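The sequence above instantiates an FSHLogProvider through WALFactory, rolls a WAL that has no entries (filesize=83 B), and archives the old file to oldWALs when the factory is closed. A rough sketch of the same flow, assuming the two-argument WALFactory constructor and getWAL(RegionInfo) available on branch-2.4; the table name and factory id are illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.RegionInfo;
    import org.apache.hadoop.hbase.client.RegionInfoBuilder;
    import org.apache.hadoop.hbase.wal.WAL;
    import org.apache.hadoop.hbase.wal.WALFactory;

    public class WalRollSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        WALFactory factory = new WALFactory(conf, "wal-roll-sketch");
        RegionInfo region = RegionInfoBuilder.newBuilder(TableName.valueOf("t")).build();
        WAL wal = factory.getWAL(region);
        // Rolling with nothing written creates a new WAL file and leaves the
        // previous (empty) one eligible for archiving, as in the log above.
        wal.rollWriter();
        // Closing the factory shuts down the provider and moves finished
        // WAL files into the oldWALs directory.
        factory.close();
      }
    }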
2023-06-08 16:59:02,382 DEBUG [Listener at localhost.localdomain/36349] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x15c75458 to 127.0.0.1:58855 2023-06-08 16:59:02,382 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38189-0x101cba8992f0001, quorum=127.0.0.1:58855, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 16:59:02,382 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-06-08 16:59:02,382 DEBUG [Listener at localhost.localdomain/36349] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 16:59:02,382 INFO [Listener at localhost.localdomain/36349] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,38189,1686243541315' ***** 2023-06-08 16:59:02,382 INFO [Listener at localhost.localdomain/36349] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-06-08 16:59:02,383 INFO [RS:0;jenkins-hbase20:38189] regionserver.HeapMemoryManager(220): Stopping 2023-06-08 16:59:02,383 INFO [RS:0;jenkins-hbase20:38189] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-06-08 16:59:02,383 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-06-08 16:59:02,383 INFO [RS:0;jenkins-hbase20:38189] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-06-08 16:59:02,384 INFO [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(3303): Received CLOSE for c41116c8718e30f400131f9ee41eb57a 2023-06-08 16:59:02,384 INFO [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,38189,1686243541315 2023-06-08 16:59:02,384 DEBUG [RS:0;jenkins-hbase20:38189] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x18bd88da to 127.0.0.1:58855 2023-06-08 16:59:02,384 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing c41116c8718e30f400131f9ee41eb57a, disabling compactions & flushes 2023-06-08 16:59:02,384 DEBUG [RS:0;jenkins-hbase20:38189] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 16:59:02,384 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1686243541868.c41116c8718e30f400131f9ee41eb57a. 2023-06-08 16:59:02,384 INFO [RS:0;jenkins-hbase20:38189] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-06-08 16:59:02,384 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1686243541868.c41116c8718e30f400131f9ee41eb57a. 2023-06-08 16:59:02,384 INFO [RS:0;jenkins-hbase20:38189] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-06-08 16:59:02,384 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1686243541868.c41116c8718e30f400131f9ee41eb57a. after waiting 0 ms 2023-06-08 16:59:02,385 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1686243541868.c41116c8718e30f400131f9ee41eb57a. 
2023-06-08 16:59:02,385 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing c41116c8718e30f400131f9ee41eb57a 1/1 column families, dataSize=78 B heapSize=488 B 2023-06-08 16:59:02,385 INFO [RS:0;jenkins-hbase20:38189] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-06-08 16:59:02,385 INFO [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-06-08 16:59:02,385 INFO [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-06-08 16:59:02,385 DEBUG [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, c41116c8718e30f400131f9ee41eb57a=hbase:namespace,,1686243541868.c41116c8718e30f400131f9ee41eb57a.} 2023-06-08 16:59:02,385 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-06-08 16:59:02,385 DEBUG [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(1504): Waiting on 1588230740, c41116c8718e30f400131f9ee41eb57a 2023-06-08 16:59:02,386 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-06-08 16:59:02,386 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-06-08 16:59:02,386 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-06-08 16:59:02,386 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-06-08 16:59:02,386 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=1.26 KB heapSize=2.89 KB 2023-06-08 16:59:02,402 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/namespace/c41116c8718e30f400131f9ee41eb57a/.tmp/info/551412227d70495f818ba2b9a572a50e 2023-06-08 16:59:02,404 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.17 KB at sequenceid=9 (bloomFilter=false), to=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/meta/1588230740/.tmp/info/38efd231cee94a85a393f911d3d62a7b 2023-06-08 16:59:02,412 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/namespace/c41116c8718e30f400131f9ee41eb57a/.tmp/info/551412227d70495f818ba2b9a572a50e as hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/namespace/c41116c8718e30f400131f9ee41eb57a/info/551412227d70495f818ba2b9a572a50e 2023-06-08 16:59:02,417 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/namespace/c41116c8718e30f400131f9ee41eb57a/info/551412227d70495f818ba2b9a572a50e, entries=2, sequenceid=6, 
filesize=4.8 K 2023-06-08 16:59:02,418 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for c41116c8718e30f400131f9ee41eb57a in 33ms, sequenceid=6, compaction requested=false 2023-06-08 16:59:02,423 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=94 B at sequenceid=9 (bloomFilter=false), to=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/meta/1588230740/.tmp/table/3e5194178b0c4a7eba36f806dcc3c29e 2023-06-08 16:59:02,426 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/namespace/c41116c8718e30f400131f9ee41eb57a/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-06-08 16:59:02,426 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1686243541868.c41116c8718e30f400131f9ee41eb57a. 2023-06-08 16:59:02,426 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for c41116c8718e30f400131f9ee41eb57a: 2023-06-08 16:59:02,426 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1686243541868.c41116c8718e30f400131f9ee41eb57a. 2023-06-08 16:59:02,429 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/meta/1588230740/.tmp/info/38efd231cee94a85a393f911d3d62a7b as hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/meta/1588230740/info/38efd231cee94a85a393f911d3d62a7b 2023-06-08 16:59:02,433 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/meta/1588230740/info/38efd231cee94a85a393f911d3d62a7b, entries=10, sequenceid=9, filesize=5.9 K 2023-06-08 16:59:02,434 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/meta/1588230740/.tmp/table/3e5194178b0c4a7eba36f806dcc3c29e as hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/meta/1588230740/table/3e5194178b0c4a7eba36f806dcc3c29e 2023-06-08 16:59:02,439 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/meta/1588230740/table/3e5194178b0c4a7eba36f806dcc3c29e, entries=2, sequenceid=9, filesize=4.7 K 2023-06-08 16:59:02,440 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.26 KB/1292, heapSize ~2.61 KB/2672, currentSize=0 B/0 for 1588230740 in 54ms, sequenceid=9, compaction requested=false 2023-06-08 16:59:02,448 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/data/hbase/meta/1588230740/recovered.edits/12.seqid, 
newMaxSeqId=12, maxSeqId=1 2023-06-08 16:59:02,449 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-06-08 16:59:02,449 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-06-08 16:59:02,449 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-06-08 16:59:02,449 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-06-08 16:59:02,575 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-06-08 16:59:02,575 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-06-08 16:59:02,586 INFO [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,38189,1686243541315; all regions closed. 2023-06-08 16:59:02,587 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/WALs/jenkins-hbase20.apache.org,38189,1686243541315 2023-06-08 16:59:02,594 DEBUG [RS:0;jenkins-hbase20:38189] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/oldWALs 2023-06-08 16:59:02,594 INFO [RS:0;jenkins-hbase20:38189] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C38189%2C1686243541315.meta:.meta(num 1686243541816) 2023-06-08 16:59:02,595 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/WALs/jenkins-hbase20.apache.org,38189,1686243541315 2023-06-08 16:59:02,599 DEBUG [RS:0;jenkins-hbase20:38189] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/oldWALs 2023-06-08 16:59:02,599 INFO [RS:0;jenkins-hbase20:38189] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C38189%2C1686243541315:(num 1686243541702) 2023-06-08 16:59:02,599 DEBUG [RS:0;jenkins-hbase20:38189] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 16:59:02,599 INFO [RS:0;jenkins-hbase20:38189] regionserver.LeaseManager(133): Closed leases 2023-06-08 16:59:02,600 INFO [RS:0;jenkins-hbase20:38189] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-06-08 16:59:02,600 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
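During the close sequence above, each region flushes its memstore to an HFile (78 B for hbase:namespace, ~1.26 KB across three families for hbase:meta) before its final seqid file is written. A test can force the same flush explicitly through the Admin API; a small sketch assuming a reachable running cluster:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ForceFlushSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Writes the table's memstore out to HFiles, the same store-flush
          // path the region server takes implicitly while closing regions above.
          admin.flush(TableName.valueOf("hbase:namespace"));
        }
      }
    }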
2023-06-08 16:59:02,600 INFO [RS:0;jenkins-hbase20:38189] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:38189 2023-06-08 16:59:02,602 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 16:59:02,602 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): regionserver:38189-0x101cba8992f0001, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,38189,1686243541315 2023-06-08 16:59:02,602 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): regionserver:38189-0x101cba8992f0001, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-06-08 16:59:02,603 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,38189,1686243541315] 2023-06-08 16:59:02,603 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,38189,1686243541315; numProcessing=1 2023-06-08 16:59:02,604 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,38189,1686243541315 already deleted, retry=false 2023-06-08 16:59:02,604 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,38189,1686243541315 expired; onlineServers=0 2023-06-08 16:59:02,604 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,46821,1686243541276' ***** 2023-06-08 16:59:02,604 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-06-08 16:59:02,604 DEBUG [M:0;jenkins-hbase20:46821] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@77e7af7d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-06-08 16:59:02,604 INFO [M:0;jenkins-hbase20:46821] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,46821,1686243541276 2023-06-08 16:59:02,604 INFO [M:0;jenkins-hbase20:46821] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,46821,1686243541276; all regions closed. 2023-06-08 16:59:02,604 DEBUG [M:0;jenkins-hbase20:46821] ipc.AbstractRpcClient(494): Stopping rpc client 2023-06-08 16:59:02,604 DEBUG [M:0;jenkins-hbase20:46821] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-06-08 16:59:02,604 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
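The master learns that the region server is gone because its ephemeral znode under /hbase/rs disappears and RegionServerTracker receives the NodeDeleted / NodeChildrenChanged events shown above. The same watch pattern can be reproduced with the plain ZooKeeper client rather than HBase's ZKWatcher; the quorum address below is the one from this log, and error handling is omitted:

    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class RsTrackerSketch {
      public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("127.0.0.1:58855", 90000, event -> { });
        // A child watch on /hbase/rs delivers a NodeChildrenChanged event when a
        // region server's ephemeral node is created or deleted.
        zk.getChildren("/hbase/rs", event -> {
          if (event.getType() == Watcher.Event.EventType.NodeChildrenChanged) {
            System.out.println("region server set changed under " + event.getPath());
          }
        });
        Thread.sleep(60_000); // keep the session alive long enough to observe events
        zk.close();
      }
    }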
2023-06-08 16:59:02,604 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686243541457] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1686243541457,5,FailOnTimeoutGroup] 2023-06-08 16:59:02,604 DEBUG [M:0;jenkins-hbase20:46821] cleaner.HFileCleaner(317): Stopping file delete threads 2023-06-08 16:59:02,604 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686243541457] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1686243541457,5,FailOnTimeoutGroup] 2023-06-08 16:59:02,605 INFO [M:0;jenkins-hbase20:46821] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-06-08 16:59:02,605 INFO [M:0;jenkins-hbase20:46821] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-06-08 16:59:02,605 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-06-08 16:59:02,605 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-06-08 16:59:02,606 INFO [M:0;jenkins-hbase20:46821] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-06-08 16:59:02,606 DEBUG [M:0;jenkins-hbase20:46821] master.HMaster(1512): Stopping service threads 2023-06-08 16:59:02,606 INFO [M:0;jenkins-hbase20:46821] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-06-08 16:59:02,606 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-06-08 16:59:02,606 ERROR [M:0;jenkins-hbase20:46821] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-11,5,PEWorkerGroup] 2023-06-08 16:59:02,606 INFO [M:0;jenkins-hbase20:46821] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-06-08 16:59:02,606 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-06-08 16:59:02,606 DEBUG [M:0;jenkins-hbase20:46821] zookeeper.ZKUtil(398): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-06-08 16:59:02,607 WARN [M:0;jenkins-hbase20:46821] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-06-08 16:59:02,607 INFO [M:0;jenkins-hbase20:46821] assignment.AssignmentManager(315): Stopping assignment manager 2023-06-08 16:59:02,607 INFO [M:0;jenkins-hbase20:46821] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-06-08 16:59:02,607 DEBUG [M:0;jenkins-hbase20:46821] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-06-08 16:59:02,607 INFO [M:0;jenkins-hbase20:46821] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:59:02,607 DEBUG [M:0;jenkins-hbase20:46821] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:59:02,607 DEBUG [M:0;jenkins-hbase20:46821] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-06-08 16:59:02,607 DEBUG [M:0;jenkins-hbase20:46821] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-06-08 16:59:02,608 INFO [M:0;jenkins-hbase20:46821] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=24.09 KB heapSize=29.59 KB 2023-06-08 16:59:02,618 INFO [M:0;jenkins-hbase20:46821] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.09 KB at sequenceid=66 (bloomFilter=true), to=hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/6d9ee322476c4c60a7156f56fbc737e9 2023-06-08 16:59:02,623 DEBUG [M:0;jenkins-hbase20:46821] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/6d9ee322476c4c60a7156f56fbc737e9 as hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/6d9ee322476c4c60a7156f56fbc737e9 2023-06-08 16:59:02,627 INFO [M:0;jenkins-hbase20:46821] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34001/user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/6d9ee322476c4c60a7156f56fbc737e9, entries=8, sequenceid=66, filesize=6.3 K 2023-06-08 16:59:02,627 INFO [M:0;jenkins-hbase20:46821] regionserver.HRegion(2948): Finished flush of dataSize ~24.09 KB/24669, heapSize ~29.57 KB/30280, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 20ms, sequenceid=66, compaction requested=false 2023-06-08 16:59:02,629 INFO [M:0;jenkins-hbase20:46821] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-06-08 16:59:02,629 DEBUG [M:0;jenkins-hbase20:46821] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-06-08 16:59:02,629 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/fb324b44-1242-d6fb-d6d7-04d945f4f59c/MasterData/WALs/jenkins-hbase20.apache.org,46821,1686243541276 2023-06-08 16:59:02,631 INFO [M:0;jenkins-hbase20:46821] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-06-08 16:59:02,631 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-06-08 16:59:02,632 INFO [M:0;jenkins-hbase20:46821] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:46821 2023-06-08 16:59:02,634 DEBUG [M:0;jenkins-hbase20:46821] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,46821,1686243541276 already deleted, retry=false 2023-06-08 16:59:02,782 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 16:59:02,782 INFO [M:0;jenkins-hbase20:46821] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,46821,1686243541276; zookeeper connection closed. 2023-06-08 16:59:02,782 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): master:46821-0x101cba8992f0000, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 16:59:02,882 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): regionserver:38189-0x101cba8992f0001, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 16:59:02,882 DEBUG [Listener at localhost.localdomain/36349-EventThread] zookeeper.ZKWatcher(600): regionserver:38189-0x101cba8992f0001, quorum=127.0.0.1:58855, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-06-08 16:59:02,882 INFO [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,38189,1686243541315; zookeeper connection closed. 
2023-06-08 16:59:02,883 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3652c2a9] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3652c2a9 2023-06-08 16:59:02,883 INFO [Listener at localhost.localdomain/36349] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-06-08 16:59:02,883 WARN [Listener at localhost.localdomain/36349] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 16:59:02,887 INFO [Listener at localhost.localdomain/36349] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 16:59:02,992 WARN [BP-166000238-148.251.75.209-1686243540818 heartbeating to localhost.localdomain/127.0.0.1:34001] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-06-08 16:59:02,993 WARN [BP-166000238-148.251.75.209-1686243540818 heartbeating to localhost.localdomain/127.0.0.1:34001] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-166000238-148.251.75.209-1686243540818 (Datanode Uuid 13181876-b43c-4d9f-a965-32bbf630d91d) service to localhost.localdomain/127.0.0.1:34001 2023-06-08 16:59:02,995 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/38a08fcb-64a6-e611-9893-d33447095f80/cluster_82b7012c-ca81-fbbe-981e-82f5fdf7f913/dfs/data/data3/current/BP-166000238-148.251.75.209-1686243540818] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:59:02,996 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/38a08fcb-64a6-e611-9893-d33447095f80/cluster_82b7012c-ca81-fbbe-981e-82f5fdf7f913/dfs/data/data4/current/BP-166000238-148.251.75.209-1686243540818] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:59:02,997 WARN [Listener at localhost.localdomain/36349] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-06-08 16:59:03,002 INFO [Listener at localhost.localdomain/36349] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-06-08 16:59:03,099 WARN [BP-166000238-148.251.75.209-1686243540818 heartbeating to localhost.localdomain/127.0.0.1:34001] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-166000238-148.251.75.209-1686243540818 (Datanode Uuid 597dd3be-64aa-40bb-93e1-9f2692a22759) service to localhost.localdomain/127.0.0.1:34001 2023-06-08 16:59:03,101 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/38a08fcb-64a6-e611-9893-d33447095f80/cluster_82b7012c-ca81-fbbe-981e-82f5fdf7f913/dfs/data/data1/current/BP-166000238-148.251.75.209-1686243540818] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:59:03,102 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/38a08fcb-64a6-e611-9893-d33447095f80/cluster_82b7012c-ca81-fbbe-981e-82f5fdf7f913/dfs/data/data2/current/BP-166000238-148.251.75.209-1686243540818] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-06-08 16:59:03,120 INFO [Listener at 
localhost.localdomain/36349] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-06-08 16:59:03,234 INFO [Listener at localhost.localdomain/36349] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-06-08 16:59:03,244 INFO [Listener at localhost.localdomain/36349] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-06-08 16:59:03,255 INFO [Listener at localhost.localdomain/36349] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=132 (was 107) - Thread LEAK? -, OpenFileDescriptor=561 (was 538) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=61 (was 61), ProcessCount=182 (was 182), AvailableMemoryMB=1620 (was 1629)
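The closing ResourceChecker entry compares thread and file-descriptor counts before and after the test ("Thread=132 (was 107)", "OpenFileDescriptor=561 (was 538)") to flag possible leaks. A tiny sketch of how such a before/after snapshot can be taken with standard JMX beans; this is illustrative and not HBase's ResourceChecker implementation:

    import java.lang.management.ManagementFactory;
    import java.lang.management.OperatingSystemMXBean;

    public class ResourceSnapshotSketch {
      public static void main(String[] args) {
        // Thread count, comparable to the "Thread=132 (was 107)" figures above.
        int threads = ManagementFactory.getThreadMXBean().getThreadCount();
        // Open file descriptors, comparable to "OpenFileDescriptor=561 (was 538)";
        // only available on Unix-like JVMs.
        long fds = -1;
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof com.sun.management.UnixOperatingSystemMXBean) {
          fds = ((com.sun.management.UnixOperatingSystemMXBean) os).getOpenFileDescriptorCount();
        }
        System.out.println("threads=" + threads + ", openFileDescriptors=" + fds);
      }
    }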