2023-05-27 22:55:28,139 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea
2023-05-27 22:55:28,152 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.regionserver.wal.TestLogRolling timeout: 13 mins
2023-05-27 22:55:28,185 INFO [Time-limited test] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=10, OpenFileDescriptor=264, MaxFileDescriptor=60000, SystemLoadAverage=191, ProcessCount=169, AvailableMemoryMB=5176
2023-05-27 22:55:28,191 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-05-27 22:55:28,191 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea/cluster_35d77e13-e973-b964-21dc-99ccf153f260, deleteOnExit=true
2023-05-27 22:55:28,191 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-05-27 22:55:28,192 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea/test.cache.data in system properties and HBase conf
2023-05-27 22:55:28,192 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea/hadoop.tmp.dir in system properties and HBase conf
2023-05-27 22:55:28,193 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea/hadoop.log.dir in system properties and HBase conf
2023-05-27 22:55:28,193 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea/mapreduce.cluster.local.dir in system properties and HBase conf
2023-05-27 22:55:28,194 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-05-27 22:55:28,194 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-05-27 22:55:28,303 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-05-27 22:55:28,677 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-05-27 22:55:28,681 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-05-27 22:55:28,681 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-05-27 22:55:28,681 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-05-27 22:55:28,682 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-05-27 22:55:28,682 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-05-27 22:55:28,682 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-05-27 22:55:28,682 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-05-27 22:55:28,683 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea/dfs.journalnode.edits.dir in system properties and HBase conf
2023-05-27 22:55:28,683 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-05-27 22:55:28,683 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea/nfs.dump.dir in system properties and HBase conf
2023-05-27 22:55:28,683 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea/java.io.tmpdir in system properties and HBase conf
2023-05-27 22:55:28,684 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea/dfs.journalnode.edits.dir in system properties and HBase conf
2023-05-27 22:55:28,684 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf
2023-05-27 22:55:28,684 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea/fs.s3a.committer.staging.tmp.path in system properties and HBase conf
Formatting using clusterid: testClusterID
2023-05-27 22:55:29,166 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000.
2023-05-27 22:55:29,180 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-05-27 22:55:29,184 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-05-27 22:55:29,444 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2023-05-27 22:55:29,587 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2023-05-27 22:55:29,601 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-27 22:55:29,636 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26
2023-05-27 22:55:29,668 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea/java.io.tmpdir/Jetty_localhost_33075_hdfs____pnve54/webapp
2023-05-27 22:55:29,799 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33075
2023-05-27 22:55:29,807 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000.
2023-05-27 22:55:29,818 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-05-27 22:55:29,818 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-05-27 22:55:30,306 WARN [Listener at localhost/43791] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-27 22:55:30,381 WARN [Listener at localhost/43791] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-05-27 22:55:30,398 WARN [Listener at localhost/43791] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-27 22:55:30,404 INFO [Listener at localhost/43791] log.Slf4jLog(67): jetty-6.1.26
2023-05-27 22:55:30,408 INFO [Listener at localhost/43791] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea/java.io.tmpdir/Jetty_localhost_35493_datanode____thtqqb/webapp
2023-05-27 22:55:30,501 INFO [Listener at localhost/43791] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35493
2023-05-27 22:55:30,787 WARN [Listener at localhost/40941] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-27 22:55:30,799 WARN [Listener at localhost/40941] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-05-27 22:55:30,801 WARN [Listener at localhost/40941] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-27 22:55:30,803 INFO [Listener at localhost/40941] log.Slf4jLog(67): jetty-6.1.26
2023-05-27 22:55:30,808 INFO [Listener at localhost/40941] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea/java.io.tmpdir/Jetty_localhost_41493_datanode____em9vyw/webapp
2023-05-27 22:55:30,905 INFO [Listener at localhost/40941] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41493
2023-05-27 22:55:30,913 WARN [Listener at localhost/33029] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-27 22:55:31,246 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x88a716c9546c6c00: Processing first storage report for DS-74f5ab3f-8fd9-4954-9a00-71c4f8ece543 from datanode d3d6b6c9-bea8-49a1-b1f3-e2a88aa3f612
2023-05-27 22:55:31,248 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x88a716c9546c6c00: from storage DS-74f5ab3f-8fd9-4954-9a00-71c4f8ece543 node DatanodeRegistration(127.0.0.1:34727, datanodeUuid=d3d6b6c9-bea8-49a1-b1f3-e2a88aa3f612, infoPort=42701, infoSecurePort=0, ipcPort=40941, storageInfo=lv=-57;cid=testClusterID;nsid=1878592559;c=1685228129253), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0
2023-05-27 22:55:31,248 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9039ecd20e128967: Processing first storage report for DS-f41ce752-1ebd-4ec6-bd21-d921224aa838 from datanode cf0422dd-3525-428f-a55e-76adb5005869
2023-05-27 22:55:31,248 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9039ecd20e128967: from storage DS-f41ce752-1ebd-4ec6-bd21-d921224aa838 node DatanodeRegistration(127.0.0.1:40243, datanodeUuid=cf0422dd-3525-428f-a55e-76adb5005869, infoPort=40817, infoSecurePort=0, ipcPort=33029, storageInfo=lv=-57;cid=testClusterID;nsid=1878592559;c=1685228129253), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-27 22:55:31,248 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x88a716c9546c6c00: Processing first storage report for DS-69d68934-3756-4137-93c1-c0c56c984412 from datanode d3d6b6c9-bea8-49a1-b1f3-e2a88aa3f612
2023-05-27 22:55:31,248 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x88a716c9546c6c00: from storage DS-69d68934-3756-4137-93c1-c0c56c984412 node DatanodeRegistration(127.0.0.1:34727, datanodeUuid=d3d6b6c9-bea8-49a1-b1f3-e2a88aa3f612, infoPort=42701, infoSecurePort=0, ipcPort=40941, storageInfo=lv=-57;cid=testClusterID;nsid=1878592559;c=1685228129253), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-27 22:55:31,248 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9039ecd20e128967: Processing first storage report for DS-e20f23a4-1435-4edf-bbdb-23b11f991981 from datanode cf0422dd-3525-428f-a55e-76adb5005869
2023-05-27 22:55:31,249 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9039ecd20e128967: from storage DS-e20f23a4-1435-4edf-bbdb-23b11f991981 node DatanodeRegistration(127.0.0.1:40243, datanodeUuid=cf0422dd-3525-428f-a55e-76adb5005869, infoPort=40817, infoSecurePort=0, ipcPort=33029, storageInfo=lv=-57;cid=testClusterID;nsid=1878592559;c=1685228129253), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-27 22:55:31,286 DEBUG [Listener at localhost/33029] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea
2023-05-27 22:55:31,347 INFO [Listener at localhost/33029] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea/cluster_35d77e13-e973-b964-21dc-99ccf153f260/zookeeper_0, clientPort=52451, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea/cluster_35d77e13-e973-b964-21dc-99ccf153f260/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea/cluster_35d77e13-e973-b964-21dc-99ccf153f260/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-05-27 22:55:31,368 INFO [Listener at localhost/33029] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=52451
2023-05-27 22:55:31,379 INFO [Listener at localhost/33029] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-27 22:55:31,382 INFO [Listener at localhost/33029] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-27 22:55:32,058 INFO [Listener at localhost/33029] util.FSUtils(471): Created version file at hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510 with version=8
2023-05-27 22:55:32,059 INFO [Listener at localhost/33029] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/hbase-staging
2023-05-27 22:55:32,382 INFO [Listener at localhost/33029] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2023-05-27 22:55:32,861 INFO [Listener at localhost/33029] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45
2023-05-27 22:55:32,892 INFO [Listener at localhost/33029] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-05-27 22:55:32,892 INFO [Listener at localhost/33029] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-05-27 22:55:32,893 INFO [Listener at localhost/33029] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-05-27 22:55:32,893 INFO [Listener at localhost/33029] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-05-27 22:55:32,893 INFO [Listener at localhost/33029] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-05-27 22:55:33,032 INFO [Listener at localhost/33029] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-05-27 22:55:33,110 DEBUG [Listener at localhost/33029] util.ClassSize(228): Using Unsafe to estimate memory layout
2023-05-27 22:55:33,200 INFO [Listener at localhost/33029] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41693
2023-05-27 22:55:33,210 INFO [Listener at localhost/33029] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-27 22:55:33,213 INFO [Listener at localhost/33029] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-27 22:55:33,232 INFO [Listener at localhost/33029] zookeeper.RecoverableZooKeeper(93): Process identifier=master:41693 connecting to ZooKeeper ensemble=127.0.0.1:52451
2023-05-27 22:55:33,271 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): master:416930x0, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-05-27 22:55:33,275 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:41693-0x1006edb17070000 connected
2023-05-27 22:55:33,299 DEBUG [Listener at localhost/33029] zookeeper.ZKUtil(164): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-05-27 22:55:33,300 DEBUG [Listener at localhost/33029] zookeeper.ZKUtil(164): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-05-27 22:55:33,303 DEBUG [Listener at localhost/33029] zookeeper.ZKUtil(164): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-05-27 22:55:33,311 DEBUG [Listener at localhost/33029] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41693
2023-05-27 22:55:33,311 DEBUG [Listener at localhost/33029] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41693
2023-05-27 22:55:33,311 DEBUG [Listener at localhost/33029] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41693
2023-05-27 22:55:33,315 DEBUG [Listener at localhost/33029] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41693
2023-05-27 22:55:33,316 DEBUG [Listener at localhost/33029] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41693
2023-05-27 22:55:33,321 INFO [Listener at localhost/33029] master.HMaster(444): hbase.rootdir=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510, hbase.cluster.distributed=false
2023-05-27 22:55:33,388 INFO [Listener at localhost/33029] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-05-27 22:55:33,388 INFO [Listener at localhost/33029] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-05-27 22:55:33,388 INFO [Listener at localhost/33029] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-05-27 22:55:33,388 INFO [Listener at localhost/33029] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-05-27 22:55:33,388 INFO [Listener at localhost/33029] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-05-27 22:55:33,389 INFO [Listener at localhost/33029] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-05-27 22:55:33,393 INFO [Listener at localhost/33029] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-05-27 22:55:33,396 INFO [Listener at localhost/33029] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36521
2023-05-27 22:55:33,399 INFO [Listener at localhost/33029] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-05-27 22:55:33,405 DEBUG [Listener at localhost/33029] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-05-27 22:55:33,406 INFO [Listener at localhost/33029] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-27 22:55:33,408 INFO [Listener at localhost/33029] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-27 22:55:33,409 INFO [Listener at localhost/33029] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36521 connecting to ZooKeeper ensemble=127.0.0.1:52451
2023-05-27 22:55:33,413 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): regionserver:365210x0, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-05-27 22:55:33,413 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36521-0x1006edb17070001 connected
2023-05-27 22:55:33,414 DEBUG [Listener at localhost/33029] zookeeper.ZKUtil(164): regionserver:36521-0x1006edb17070001, quorum=127.0.0.1:52451, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-05-27 22:55:33,415 DEBUG [Listener at localhost/33029] zookeeper.ZKUtil(164): regionserver:36521-0x1006edb17070001, quorum=127.0.0.1:52451, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-05-27 22:55:33,416 DEBUG [Listener at localhost/33029] zookeeper.ZKUtil(164): regionserver:36521-0x1006edb17070001, quorum=127.0.0.1:52451, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-05-27 22:55:33,416 DEBUG [Listener at localhost/33029] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36521
2023-05-27 22:55:33,417 DEBUG [Listener at localhost/33029] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36521
2023-05-27 22:55:33,417 DEBUG [Listener at localhost/33029] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36521
2023-05-27 22:55:33,417 DEBUG [Listener at localhost/33029] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36521
2023-05-27 22:55:33,418 DEBUG [Listener at localhost/33029] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36521
2023-05-27 22:55:33,419 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,41693,1685228132211
2023-05-27 22:55:33,429 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-05-27 22:55:33,430 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,41693,1685228132211
2023-05-27 22:55:33,449 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-05-27 22:55:33,449 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): regionserver:36521-0x1006edb17070001, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-05-27 22:55:33,449 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-27 22:55:33,450 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-05-27 22:55:33,452 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,41693,1685228132211 from backup master directory
2023-05-27 22:55:33,452 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-05-27 22:55:33,455 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,41693,1685228132211
2023-05-27 22:55:33,456 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-05-27 22:55:33,456 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-05-27 22:55:33,456 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,41693,1685228132211
2023-05-27 22:55:33,459 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0
2023-05-27 22:55:33,460 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0
2023-05-27 22:55:33,541 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/hbase.id with ID: 67d994fc-e923-4939-9283-fed2ab3db3df
2023-05-27 22:55:33,580 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-27 22:55:33,597 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-27 22:55:33,663 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x0862b985 to 127.0.0.1:52451 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-05-27 22:55:33,695 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7c092136, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-05-27 22:55:33,717 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-05-27 22:55:33,719 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000
2023-05-27 22:55:33,727 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-05-27 22:55:33,759 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/MasterData/data/master/store-tmp
2023-05-27 22:55:33,788 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-27 22:55:33,789 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-05-27 22:55:33,789 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-27 22:55:33,789 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-27 22:55:33,789 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-05-27 22:55:33,789 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-27 22:55:33,789 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-27 22:55:33,790 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-05-27 22:55:33,791 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/MasterData/WALs/jenkins-hbase4.apache.org,41693,1685228132211
2023-05-27 22:55:33,810 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41693%2C1685228132211, suffix=, logDir=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/MasterData/WALs/jenkins-hbase4.apache.org,41693,1685228132211, archiveDir=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/MasterData/oldWALs, maxLogs=10
2023-05-27 22:55:33,829 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.CommonFSUtils$DfsBuilderUtility(753): Could not find replicate method on builder; will not set replicate when creating output stream
java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DistributedFileSystem$HdfsDataOutputStreamBuilder.replicate()
	at java.lang.Class.getMethod(Class.java:1786)
	at org.apache.hadoop.hbase.util.CommonFSUtils$DfsBuilderUtility.(CommonFSUtils.java:750)
	at org.apache.hadoop.hbase.util.CommonFSUtils.createForWal(CommonFSUtils.java:802)
	at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.initOutput(ProtobufLogWriter.java:102)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:160)
	at org.apache.hadoop.hbase.wal.FSHLogProvider.createWriter(FSHLogProvider.java:78)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:307)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:70)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:881)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:574)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.init(AbstractFSWAL.java:515)
	at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:160)
	at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:62)
	at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:295)
	at org.apache.hadoop.hbase.master.region.MasterRegion.createWAL(MasterRegion.java:200)
	at org.apache.hadoop.hbase.master.region.MasterRegion.bootstrap(MasterRegion.java:220)
	at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:348)
	at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
	at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855)
	at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193)
	at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528)
	at java.lang.Thread.run(Thread.java:750)
2023-05-27 22:55:33,851 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/MasterData/WALs/jenkins-hbase4.apache.org,41693,1685228132211/jenkins-hbase4.apache.org%2C41693%2C1685228132211.1685228133827
2023-05-27 22:55:33,851 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40243,DS-f41ce752-1ebd-4ec6-bd21-d921224aa838,DISK], DatanodeInfoWithStorage[127.0.0.1:34727,DS-74f5ab3f-8fd9-4954-9a00-71c4f8ece543,DISK]]
2023-05-27 22:55:33,851 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}
2023-05-27 22:55:33,852 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-27 22:55:33,854 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682
2023-05-27 22:55:33,855 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682
2023-05-27 22:55:33,908 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682
2023-05-27 22:55:33,915 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc
2023-05-27 22:55:33,941 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc
2023-05-27 22:55:33,954 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-27 22:55:33,959 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682
2023-05-27 22:55:33,961 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682
2023-05-27 22:55:33,975 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682
2023-05-27 22:55:33,979 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-05-27 22:55:33,980 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=738444, jitterRate=-0.061020657420158386}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-05-27 22:55:33,980 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682:
2023-05-27 22:55:33,981 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4
2023-05-27 22:55:34,000 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5
2023-05-27 22:55:34,000 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50
2023-05-27 22:55:34,002 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery...
2023-05-27 22:55:34,004 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec
2023-05-27 22:55:34,037 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 32 msec
2023-05-27 22:55:34,037 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150
2023-05-27 22:55:34,062 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: []
2023-05-27 22:55:34,067 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'.
2023-05-27 22:55:34,096 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false
2023-05-27 22:55:34,099 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc.
2023-05-27 22:55:34,102 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer
2023-05-27 22:55:34,108 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited
2023-05-27 22:55:34,114 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer
2023-05-27 22:55:34,117 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-27 22:55:34,118 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split
2023-05-27 22:55:34,118 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge
2023-05-27 22:55:34,130 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup
2023-05-27 22:55:34,135 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running
2023-05-27 22:55:34,135 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): regionserver:36521-0x1006edb17070001, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running
2023-05-27 22:55:34,135 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-27 22:55:34,136 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,41693,1685228132211, sessionid=0x1006edb17070000, setting cluster-up flag (Was=false)
2023-05-27 22:55:34,150 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-27 22:55:34,157 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort
2023-05-27 22:55:34,158 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,41693,1685228132211
2023-05-27 22:55:34,162 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-27 22:55:34,169 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort
2023-05-27 22:55:34,170 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,41693,1685228132211
2023-05-27 22:55:34,172 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/.hbase-snapshot/.tmp
2023-05-27 22:55:34,221 INFO [RS:0;jenkins-hbase4:36521] regionserver.HRegionServer(951): ClusterId : 67d994fc-e923-4939-9283-fed2ab3db3df
2023-05-27 22:55:34,226 DEBUG [RS:0;jenkins-hbase4:36521] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing
2023-05-27 22:55:34,230 DEBUG [RS:0;jenkins-hbase4:36521] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized
2023-05-27 22:55:34,230 DEBUG [RS:0;jenkins-hbase4:36521] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing
2023-05-27 22:55:34,235 DEBUG [RS:0;jenkins-hbase4:36521] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized
2023-05-27 22:55:34,236 DEBUG [RS:0;jenkins-hbase4:36521] zookeeper.ReadOnlyZKClient(139): Connect 0x2d1a90ed to 127.0.0.1:52451 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-05-27 22:55:34,241 DEBUG [RS:0;jenkins-hbase4:36521] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@23dd7b3e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-05-27 22:55:34,241 DEBUG [RS:0;jenkins-hbase4:36521] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@27e2fb9d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0
2023-05-27 22:55:34,264 DEBUG [RS:0;jenkins-hbase4:36521] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:36521
2023-05-27 22:55:34,268 INFO [RS:0;jenkins-hbase4:36521] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled
2023-05-27 22:55:34,268 INFO [RS:0;jenkins-hbase4:36521] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled
2023-05-27 22:55:34,268 DEBUG [RS:0;jenkins-hbase4:36521] regionserver.HRegionServer(1022): About to register with Master.
2023-05-27 22:55:34,271 INFO [RS:0;jenkins-hbase4:36521] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,41693,1685228132211 with isa=jenkins-hbase4.apache.org/172.31.14.131:36521, startcode=1685228133387
2023-05-27 22:55:34,279 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta
2023-05-27 22:55:34,286 DEBUG [RS:0;jenkins-hbase4:36521] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false
2023-05-27 22:55:34,291 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5
2023-05-27 22:55:34,291 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5
2023-05-27 22:55:34,291 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5
2023-05-27 22:55:34,291 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5
2023-05-27 22:55:34,291 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10
2023-05-27 22:55:34,292 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-05-27 22:55:34,292 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2
2023-05-27 22:55:34,292 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-05-27 22:55:34,295 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685228164295
2023-05-27 22:55:34,298 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1
2023-05-27 22:55:34,301 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta
2023-05-27 22:55:34,301 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region
2023-05-27 22:55:34,307 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}
2023-05-27 22:55:34,308 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner
2023-05-27 22:55:34,314 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner
2023-05-27 22:55:34,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner
2023-05-27 22:55:34,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner
2023-05-27 22:55:34,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads
2023-05-27 22:55:34,316 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled.
2023-05-27 22:55:34,317 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2
2023-05-27 22:55:34,319 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner
2023-05-27 22:55:34,320 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner
2023-05-27 22:55:34,322 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner
2023-05-27 22:55:34,322 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner
2023-05-27 22:55:34,324 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685228134324,5,FailOnTimeoutGroup]
2023-05-27 22:55:34,325 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685228134325,5,FailOnTimeoutGroup]
2023-05-27 22:55:34,325 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled.
2023-05-27 22:55:34,325 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it.
2023-05-27 22:55:34,327 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled.
2023-05-27 22:55:34,327 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled.
2023-05-27 22:55:34,346 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-27 22:55:34,347 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-27 22:55:34,347 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510 2023-05-27 22:55:34,377 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:55:34,381 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-27 22:55:34,384 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/meta/1588230740/info 2023-05-27 22:55:34,385 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-27 22:55:34,387 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:55:34,387 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-27 22:55:34,390 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/meta/1588230740/rep_barrier 2023-05-27 22:55:34,391 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-27 22:55:34,392 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:55:34,392 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-27 22:55:34,394 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/meta/1588230740/table 2023-05-27 22:55:34,395 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-27 22:55:34,396 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:55:34,398 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/meta/1588230740 2023-05-27 22:55:34,399 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/meta/1588230740 2023-05-27 22:55:34,403 DEBUG [PEWorker-1] 
regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-27 22:55:34,405 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-27 22:55:34,409 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 22:55:34,410 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=762879, jitterRate=-0.029950350522994995}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-27 22:55:34,410 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-27 22:55:34,410 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-27 22:55:34,410 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-27 22:55:34,410 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-27 22:55:34,411 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-27 22:55:34,411 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-27 22:55:34,411 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-27 22:55:34,412 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-27 22:55:34,418 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-27 22:55:34,418 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-27 22:55:34,429 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-27 22:55:34,431 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60139, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-05-27 22:55:34,443 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-27 22:55:34,444 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41693] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,36521,1685228133387 2023-05-27 22:55:34,445 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-27 22:55:34,462 DEBUG [RS:0;jenkins-hbase4:36521] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510 
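The "Opened 1588230740" entry above names a SteppingSplitPolicy and a desiredMaxFileSize of 762879, which is the configured region max file size adjusted by the logged jitterRate. A sketch, under the assumption that the test drives these through ordinary configuration keys (it may set them elsewhere); both keys are standard HBase properties.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class SplitConfigSketch {
  static Configuration build() {
    Configuration conf = HBaseConfiguration.create();
    // Split policy class named in the "Opened 1588230740" line above.
    conf.set("hbase.regionserver.region.split.policy",
        "org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy");
    // Region max file size; the logged desiredMaxFileSize is this value plus/minus jitter.
    conf.setLong("hbase.hregion.max.filesize", 786432L);
    return conf;
  }
}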
2023-05-27 22:55:34,462 DEBUG [RS:0;jenkins-hbase4:36521] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43791 2023-05-27 22:55:34,462 DEBUG [RS:0;jenkins-hbase4:36521] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-27 22:55:34,467 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 22:55:34,468 DEBUG [RS:0;jenkins-hbase4:36521] zookeeper.ZKUtil(162): regionserver:36521-0x1006edb17070001, quorum=127.0.0.1:52451, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36521,1685228133387 2023-05-27 22:55:34,468 WARN [RS:0;jenkins-hbase4:36521] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-27 22:55:34,469 INFO [RS:0;jenkins-hbase4:36521] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 22:55:34,469 DEBUG [RS:0;jenkins-hbase4:36521] regionserver.HRegionServer(1946): logDir=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/WALs/jenkins-hbase4.apache.org,36521,1685228133387 2023-05-27 22:55:34,471 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,36521,1685228133387] 2023-05-27 22:55:34,479 DEBUG [RS:0;jenkins-hbase4:36521] zookeeper.ZKUtil(162): regionserver:36521-0x1006edb17070001, quorum=127.0.0.1:52451, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36521,1685228133387 2023-05-27 22:55:34,488 DEBUG [RS:0;jenkins-hbase4:36521] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-27 22:55:34,497 INFO [RS:0;jenkins-hbase4:36521] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-27 22:55:34,516 INFO [RS:0;jenkins-hbase4:36521] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-27 22:55:34,519 INFO [RS:0;jenkins-hbase4:36521] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-27 22:55:34,519 INFO [RS:0;jenkins-hbase4:36521] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-27 22:55:34,520 INFO [RS:0;jenkins-hbase4:36521] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-27 22:55:34,526 INFO [RS:0;jenkins-hbase4:36521] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-27 22:55:34,527 DEBUG [RS:0;jenkins-hbase4:36521] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:55:34,527 DEBUG [RS:0;jenkins-hbase4:36521] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:55:34,527 DEBUG [RS:0;jenkins-hbase4:36521] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:55:34,527 DEBUG [RS:0;jenkins-hbase4:36521] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:55:34,527 DEBUG [RS:0;jenkins-hbase4:36521] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:55:34,527 DEBUG [RS:0;jenkins-hbase4:36521] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-27 22:55:34,528 DEBUG [RS:0;jenkins-hbase4:36521] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:55:34,528 DEBUG [RS:0;jenkins-hbase4:36521] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:55:34,528 DEBUG [RS:0;jenkins-hbase4:36521] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:55:34,528 DEBUG [RS:0;jenkins-hbase4:36521] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:55:34,529 INFO [RS:0;jenkins-hbase4:36521] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-27 22:55:34,529 INFO [RS:0;jenkins-hbase4:36521] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-27 22:55:34,529 INFO [RS:0;jenkins-hbase4:36521] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-27 22:55:34,545 INFO [RS:0;jenkins-hbase4:36521] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-27 22:55:34,547 INFO [RS:0;jenkins-hbase4:36521] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36521,1685228133387-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-27 22:55:34,563 INFO [RS:0;jenkins-hbase4:36521] regionserver.Replication(203): jenkins-hbase4.apache.org,36521,1685228133387 started 2023-05-27 22:55:34,563 INFO [RS:0;jenkins-hbase4:36521] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,36521,1685228133387, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:36521, sessionid=0x1006edb17070001 2023-05-27 22:55:34,563 DEBUG [RS:0;jenkins-hbase4:36521] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-27 22:55:34,563 DEBUG [RS:0;jenkins-hbase4:36521] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,36521,1685228133387 2023-05-27 22:55:34,563 DEBUG [RS:0;jenkins-hbase4:36521] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36521,1685228133387' 2023-05-27 22:55:34,563 DEBUG [RS:0;jenkins-hbase4:36521] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 22:55:34,564 DEBUG [RS:0;jenkins-hbase4:36521] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 22:55:34,564 DEBUG [RS:0;jenkins-hbase4:36521] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-27 22:55:34,564 DEBUG [RS:0;jenkins-hbase4:36521] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-27 22:55:34,565 DEBUG [RS:0;jenkins-hbase4:36521] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,36521,1685228133387 2023-05-27 22:55:34,565 DEBUG [RS:0;jenkins-hbase4:36521] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36521,1685228133387' 2023-05-27 22:55:34,565 DEBUG [RS:0;jenkins-hbase4:36521] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-27 22:55:34,565 DEBUG [RS:0;jenkins-hbase4:36521] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-27 22:55:34,566 DEBUG [RS:0;jenkins-hbase4:36521] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-27 22:55:34,566 INFO [RS:0;jenkins-hbase4:36521] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-27 22:55:34,566 INFO [RS:0;jenkins-hbase4:36521] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-27 22:55:34,597 DEBUG [jenkins-hbase4:41693] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-27 22:55:34,600 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,36521,1685228133387, state=OPENING 2023-05-27 22:55:34,607 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-27 22:55:34,614 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:55:34,615 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-27 22:55:34,619 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,36521,1685228133387}] 2023-05-27 22:55:34,676 INFO [RS:0;jenkins-hbase4:36521] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36521%2C1685228133387, suffix=, logDir=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/WALs/jenkins-hbase4.apache.org,36521,1685228133387, archiveDir=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/oldWALs, maxLogs=32 2023-05-27 22:55:34,691 INFO [RS:0;jenkins-hbase4:36521] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/WALs/jenkins-hbase4.apache.org,36521,1685228133387/jenkins-hbase4.apache.org%2C36521%2C1685228133387.1685228134679 2023-05-27 22:55:34,691 DEBUG [RS:0;jenkins-hbase4:36521] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34727,DS-74f5ab3f-8fd9-4954-9a00-71c4f8ece543,DISK], DatanodeInfoWithStorage[127.0.0.1:40243,DS-f41ce752-1ebd-4ec6-bd21-d921224aa838,DISK]] 2023-05-27 22:55:34,802 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,36521,1685228133387 2023-05-27 22:55:34,804 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-27 22:55:34,808 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35776, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-27 22:55:34,821 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-27 22:55:34,822 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 22:55:34,825 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36521%2C1685228133387.meta, suffix=.meta, logDir=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/WALs/jenkins-hbase4.apache.org,36521,1685228133387, archiveDir=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/oldWALs, maxLogs=32 2023-05-27 22:55:34,839 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/WALs/jenkins-hbase4.apache.org,36521,1685228133387/jenkins-hbase4.apache.org%2C36521%2C1685228133387.meta.1685228134827.meta 2023-05-27 22:55:34,839 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40243,DS-f41ce752-1ebd-4ec6-bd21-d921224aa838,DISK], DatanodeInfoWithStorage[127.0.0.1:34727,DS-74f5ab3f-8fd9-4954-9a00-71c4f8ece543,DISK]] 2023-05-27 22:55:34,839 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-27 22:55:34,841 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-27 22:55:34,856 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-27 22:55:34,861 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-27 22:55:34,866 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-27 22:55:34,867 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:55:34,867 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-27 22:55:34,867 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-27 22:55:34,869 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-27 22:55:34,871 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/meta/1588230740/info 2023-05-27 22:55:34,871 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/meta/1588230740/info 2023-05-27 22:55:34,872 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-27 22:55:34,873 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:55:34,873 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-27 22:55:34,874 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/meta/1588230740/rep_barrier 2023-05-27 22:55:34,874 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/meta/1588230740/rep_barrier 2023-05-27 22:55:34,875 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-27 22:55:34,876 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:55:34,876 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-27 22:55:34,877 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/meta/1588230740/table 2023-05-27 22:55:34,877 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/meta/1588230740/table 2023-05-27 22:55:34,878 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-27 22:55:34,879 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:55:34,880 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/meta/1588230740 2023-05-27 22:55:34,883 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/meta/1588230740 2023-05-27 22:55:34,887 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-27 22:55:34,889 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-27 22:55:34,890 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=800001, jitterRate=0.01725427806377411}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-27 22:55:34,890 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-27 22:55:34,901 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685228134794 2023-05-27 22:55:34,919 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-27 22:55:34,920 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-27 22:55:34,921 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,36521,1685228133387, state=OPEN 2023-05-27 22:55:34,923 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-27 22:55:34,923 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-27 22:55:34,928 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-27 22:55:34,929 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,36521,1685228133387 in 304 msec 2023-05-27 22:55:34,934 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-27 22:55:34,934 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 500 msec 2023-05-27 22:55:34,940 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 724 msec 2023-05-27 22:55:34,940 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685228134940, completionTime=-1 2023-05-27 22:55:34,941 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-27 22:55:34,941 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-27 22:55:35,002 DEBUG [hconnection-0x2ed704b0-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-27 22:55:35,007 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35792, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-27 22:55:35,031 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-27 22:55:35,031 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685228195031 2023-05-27 22:55:35,031 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685228255031 2023-05-27 22:55:35,031 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 90 msec 2023-05-27 22:55:35,053 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41693,1685228132211-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-27 22:55:35,053 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41693,1685228132211-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 22:55:35,053 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41693,1685228132211-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 22:55:35,055 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:41693, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 22:55:35,055 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-27 22:55:35,061 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-27 22:55:35,069 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
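Once hbase:meta is OPEN and its location is published in ZooKeeper as above, a client can resolve that location through the normal region-locator path. A sketch only, assuming the mini cluster's ZooKeeper quorum shown in the log (127.0.0.1:52451); this is not part of the test itself.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaLocationSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Quorum and port taken from the log; adjust for a real deployment.
    conf.set("hbase.zookeeper.quorum", "127.0.0.1");
    conf.set("hbase.zookeeper.property.clientPort", "52451");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
      // Forces a fresh lookup of the single hbase:meta region's location.
      HRegionLocation loc = locator.getRegionLocation(Bytes.toBytes(""), true);
      System.out.println("hbase:meta hosted on " + loc.getServerName());
    }
  }
}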
2023-05-27 22:55:35,070 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-27 22:55:35,081 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-27 22:55:35,083 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-27 22:55:35,085 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-27 22:55:35,105 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/.tmp/data/hbase/namespace/c6f7fd485edb162049f588b53c69eb6d 2023-05-27 22:55:35,108 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/.tmp/data/hbase/namespace/c6f7fd485edb162049f588b53c69eb6d empty. 2023-05-27 22:55:35,108 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/.tmp/data/hbase/namespace/c6f7fd485edb162049f588b53c69eb6d 2023-05-27 22:55:35,108 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-27 22:55:35,151 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-27 22:55:35,153 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => c6f7fd485edb162049f588b53c69eb6d, NAME => 'hbase:namespace,,1685228135070.c6f7fd485edb162049f588b53c69eb6d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/.tmp 2023-05-27 22:55:35,167 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685228135070.c6f7fd485edb162049f588b53c69eb6d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:55:35,167 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing c6f7fd485edb162049f588b53c69eb6d, disabling compactions & flushes 2023-05-27 22:55:35,167 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685228135070.c6f7fd485edb162049f588b53c69eb6d. 
2023-05-27 22:55:35,167 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685228135070.c6f7fd485edb162049f588b53c69eb6d. 2023-05-27 22:55:35,167 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685228135070.c6f7fd485edb162049f588b53c69eb6d. after waiting 0 ms 2023-05-27 22:55:35,168 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685228135070.c6f7fd485edb162049f588b53c69eb6d. 2023-05-27 22:55:35,168 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685228135070.c6f7fd485edb162049f588b53c69eb6d. 2023-05-27 22:55:35,168 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for c6f7fd485edb162049f588b53c69eb6d: 2023-05-27 22:55:35,171 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-27 22:55:35,186 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685228135070.c6f7fd485edb162049f588b53c69eb6d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685228135174"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685228135174"}]},"ts":"1685228135174"} 2023-05-27 22:55:35,210 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-27 22:55:35,212 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-27 22:55:35,216 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685228135212"}]},"ts":"1685228135212"} 2023-05-27 22:55:35,219 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-27 22:55:35,228 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=c6f7fd485edb162049f588b53c69eb6d, ASSIGN}] 2023-05-27 22:55:35,230 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=c6f7fd485edb162049f588b53c69eb6d, ASSIGN 2023-05-27 22:55:35,231 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=c6f7fd485edb162049f588b53c69eb6d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36521,1685228133387; forceNewPlan=false, retain=false 2023-05-27 22:55:35,382 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=c6f7fd485edb162049f588b53c69eb6d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36521,1685228133387 2023-05-27 22:55:35,383 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685228135070.c6f7fd485edb162049f588b53c69eb6d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685228135382"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685228135382"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685228135382"}]},"ts":"1685228135382"} 2023-05-27 22:55:35,387 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure c6f7fd485edb162049f588b53c69eb6d, server=jenkins-hbase4.apache.org,36521,1685228133387}] 2023-05-27 22:55:35,549 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685228135070.c6f7fd485edb162049f588b53c69eb6d. 2023-05-27 22:55:35,551 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c6f7fd485edb162049f588b53c69eb6d, NAME => 'hbase:namespace,,1685228135070.c6f7fd485edb162049f588b53c69eb6d.', STARTKEY => '', ENDKEY => ''} 2023-05-27 22:55:35,553 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace c6f7fd485edb162049f588b53c69eb6d 2023-05-27 22:55:35,553 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685228135070.c6f7fd485edb162049f588b53c69eb6d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:55:35,553 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c6f7fd485edb162049f588b53c69eb6d 2023-05-27 22:55:35,553 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c6f7fd485edb162049f588b53c69eb6d 2023-05-27 22:55:35,557 INFO [StoreOpener-c6f7fd485edb162049f588b53c69eb6d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region c6f7fd485edb162049f588b53c69eb6d 2023-05-27 22:55:35,560 DEBUG [StoreOpener-c6f7fd485edb162049f588b53c69eb6d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/namespace/c6f7fd485edb162049f588b53c69eb6d/info 2023-05-27 22:55:35,560 DEBUG [StoreOpener-c6f7fd485edb162049f588b53c69eb6d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/namespace/c6f7fd485edb162049f588b53c69eb6d/info 2023-05-27 22:55:35,560 INFO [StoreOpener-c6f7fd485edb162049f588b53c69eb6d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c6f7fd485edb162049f588b53c69eb6d columnFamilyName info 2023-05-27 22:55:35,561 INFO [StoreOpener-c6f7fd485edb162049f588b53c69eb6d-1] regionserver.HStore(310): Store=c6f7fd485edb162049f588b53c69eb6d/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:55:35,563 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/namespace/c6f7fd485edb162049f588b53c69eb6d 2023-05-27 22:55:35,564 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/namespace/c6f7fd485edb162049f588b53c69eb6d 2023-05-27 22:55:35,568 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c6f7fd485edb162049f588b53c69eb6d 2023-05-27 22:55:35,573 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/namespace/c6f7fd485edb162049f588b53c69eb6d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 22:55:35,573 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c6f7fd485edb162049f588b53c69eb6d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=747556, jitterRate=-0.04943428933620453}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 22:55:35,573 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c6f7fd485edb162049f588b53c69eb6d: 2023-05-27 22:55:35,576 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685228135070.c6f7fd485edb162049f588b53c69eb6d., pid=6, masterSystemTime=1685228135540 2023-05-27 22:55:35,580 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685228135070.c6f7fd485edb162049f588b53c69eb6d. 2023-05-27 22:55:35,580 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685228135070.c6f7fd485edb162049f588b53c69eb6d. 
2023-05-27 22:55:35,581 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=c6f7fd485edb162049f588b53c69eb6d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36521,1685228133387 2023-05-27 22:55:35,582 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685228135070.c6f7fd485edb162049f588b53c69eb6d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685228135581"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685228135581"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685228135581"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685228135581"}]},"ts":"1685228135581"} 2023-05-27 22:55:35,591 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-27 22:55:35,591 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure c6f7fd485edb162049f588b53c69eb6d, server=jenkins-hbase4.apache.org,36521,1685228133387 in 200 msec 2023-05-27 22:55:35,594 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-27 22:55:35,595 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=c6f7fd485edb162049f588b53c69eb6d, ASSIGN in 364 msec 2023-05-27 22:55:35,596 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-27 22:55:35,596 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685228135596"}]},"ts":"1685228135596"} 2023-05-27 22:55:35,599 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-27 22:55:35,603 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-27 22:55:35,606 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 532 msec 2023-05-27 22:55:35,684 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-27 22:55:35,686 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-27 22:55:35,686 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:55:35,730 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-27 22:55:35,745 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): 
master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-27 22:55:35,751 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 32 msec 2023-05-27 22:55:35,764 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-27 22:55:35,776 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-27 22:55:35,781 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 17 msec 2023-05-27 22:55:35,789 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-27 22:55:35,792 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-27 22:55:35,792 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 2.335sec 2023-05-27 22:55:35,794 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-27 22:55:35,796 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-27 22:55:35,796 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-27 22:55:35,797 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41693,1685228132211-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-27 22:55:35,798 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41693,1685228132211-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
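The CreateNamespaceProcedure entries above ('default' and 'hbase') are created by the master itself during initialization. For comparison, a sketch of how an additional namespace could be created through the public Admin API; "example_ns" is a placeholder and does not appear in the log.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class NamespaceSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Placeholder namespace; 'default' and 'hbase' are reserved and created by the master.
      admin.createNamespace(NamespaceDescriptor.create("example_ns").build());
    }
  }
}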
2023-05-27 22:55:35,808 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-27 22:55:35,826 DEBUG [Listener at localhost/33029] zookeeper.ReadOnlyZKClient(139): Connect 0x088e51b8 to 127.0.0.1:52451 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 22:55:35,830 DEBUG [Listener at localhost/33029] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6b8cb99f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 22:55:35,842 DEBUG [hconnection-0x40ac2ab4-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-27 22:55:35,852 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35796, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-27 22:55:35,866 INFO [Listener at localhost/33029] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,41693,1685228132211 2023-05-27 22:55:35,866 INFO [Listener at localhost/33029] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:55:35,876 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-27 22:55:35,876 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:55:35,877 INFO [Listener at localhost/33029] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-27 22:55:35,888 DEBUG [Listener at localhost/33029] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-27 22:55:35,892 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54420, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-27 22:55:35,900 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41693] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-27 22:55:35,900 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41693] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
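The "Minicluster is up" and "set balanceSwitch=false" entries above correspond to the usual test harness pattern. A minimal sketch of that pattern with HBaseTestingUtility, not the TestLogRolling code itself; test-specific configuration is omitted.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.client.Admin;

public class MiniClusterSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    util.startMiniCluster();            // brings up DFS, ZooKeeper, one master, one region server
    Admin admin = util.getAdmin();
    admin.balancerSwitch(false, true);  // mirrors the "set balanceSwitch=false" entry above
    // ... run test logic against util.getConnection() here ...
    util.shutdownMiniCluster();
  }
}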
2023-05-27 22:55:35,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41693] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-27 22:55:35,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41693] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling 2023-05-27 22:55:35,908 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-05-27 22:55:35,910 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-27 22:55:35,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41693] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testSlowSyncLogRolling" procId is: 9 2023-05-27 22:55:35,914 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820 2023-05-27 22:55:35,915 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820 empty. 
2023-05-27 22:55:35,917 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820 2023-05-27 22:55:35,917 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testSlowSyncLogRolling regions 2023-05-27 22:55:35,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41693] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-27 22:55:35,940 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/.tabledesc/.tableinfo.0000000001 2023-05-27 22:55:35,941 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => 263706a96f5d5637fd790130c3614820, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1685228135900.263706a96f5d5637fd790130c3614820.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/.tmp 2023-05-27 22:55:35,956 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1685228135900.263706a96f5d5637fd790130c3614820.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:55:35,956 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1604): Closing 263706a96f5d5637fd790130c3614820, disabling compactions & flushes 2023-05-27 22:55:35,956 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1685228135900.263706a96f5d5637fd790130c3614820. 2023-05-27 22:55:35,956 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1685228135900.263706a96f5d5637fd790130c3614820. 2023-05-27 22:55:35,956 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1685228135900.263706a96f5d5637fd790130c3614820. after waiting 0 ms 2023-05-27 22:55:35,956 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1685228135900.263706a96f5d5637fd790130c3614820. 2023-05-27 22:55:35,956 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1685228135900.263706a96f5d5637fd790130c3614820. 
2023-05-27 22:55:35,956 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for 263706a96f5d5637fd790130c3614820: 2023-05-27 22:55:35,960 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ADD_TO_META 2023-05-27 22:55:35,962 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testSlowSyncLogRolling,,1685228135900.263706a96f5d5637fd790130c3614820.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1685228135961"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685228135961"}]},"ts":"1685228135961"} 2023-05-27 22:55:35,965 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-27 22:55:35,966 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-27 22:55:35,966 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685228135966"}]},"ts":"1685228135966"} 2023-05-27 22:55:35,969 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLING in hbase:meta 2023-05-27 22:55:35,972 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=263706a96f5d5637fd790130c3614820, ASSIGN}] 2023-05-27 22:55:35,974 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=263706a96f5d5637fd790130c3614820, ASSIGN 2023-05-27 22:55:35,976 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=263706a96f5d5637fd790130c3614820, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36521,1685228133387; forceNewPlan=false, retain=false 2023-05-27 22:55:36,127 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=263706a96f5d5637fd790130c3614820, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36521,1685228133387 2023-05-27 22:55:36,127 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testSlowSyncLogRolling,,1685228135900.263706a96f5d5637fd790130c3614820.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1685228136127"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685228136127"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685228136127"}]},"ts":"1685228136127"} 2023-05-27 22:55:36,131 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 263706a96f5d5637fd790130c3614820, server=jenkins-hbase4.apache.org,36521,1685228133387}] 2023-05-27 22:55:36,290 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testSlowSyncLogRolling,,1685228135900.263706a96f5d5637fd790130c3614820. 2023-05-27 22:55:36,291 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 263706a96f5d5637fd790130c3614820, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1685228135900.263706a96f5d5637fd790130c3614820.', STARTKEY => '', ENDKEY => ''} 2023-05-27 22:55:36,291 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testSlowSyncLogRolling 263706a96f5d5637fd790130c3614820 2023-05-27 22:55:36,291 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1685228135900.263706a96f5d5637fd790130c3614820.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:55:36,292 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 263706a96f5d5637fd790130c3614820 2023-05-27 22:55:36,292 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 263706a96f5d5637fd790130c3614820 2023-05-27 22:55:36,294 INFO [StoreOpener-263706a96f5d5637fd790130c3614820-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 263706a96f5d5637fd790130c3614820 2023-05-27 22:55:36,297 DEBUG [StoreOpener-263706a96f5d5637fd790130c3614820-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/info 2023-05-27 22:55:36,297 DEBUG [StoreOpener-263706a96f5d5637fd790130c3614820-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/info 2023-05-27 22:55:36,298 INFO [StoreOpener-263706a96f5d5637fd790130c3614820-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 263706a96f5d5637fd790130c3614820 columnFamilyName info 2023-05-27 22:55:36,300 INFO [StoreOpener-263706a96f5d5637fd790130c3614820-1] regionserver.HStore(310): Store=263706a96f5d5637fd790130c3614820/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:55:36,302 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 
recovered edits file(s) under hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820 2023-05-27 22:55:36,303 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820 2023-05-27 22:55:36,308 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 263706a96f5d5637fd790130c3614820 2023-05-27 22:55:36,311 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 22:55:36,312 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 263706a96f5d5637fd790130c3614820; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=807559, jitterRate=0.02686476707458496}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 22:55:36,312 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 263706a96f5d5637fd790130c3614820: 2023-05-27 22:55:36,314 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testSlowSyncLogRolling,,1685228135900.263706a96f5d5637fd790130c3614820., pid=11, masterSystemTime=1685228136284 2023-05-27 22:55:36,317 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testSlowSyncLogRolling,,1685228135900.263706a96f5d5637fd790130c3614820. 2023-05-27 22:55:36,317 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testSlowSyncLogRolling,,1685228135900.263706a96f5d5637fd790130c3614820. 
2023-05-27 22:55:36,318 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=263706a96f5d5637fd790130c3614820, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36521,1685228133387 2023-05-27 22:55:36,318 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testSlowSyncLogRolling,,1685228135900.263706a96f5d5637fd790130c3614820.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1685228136318"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685228136318"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685228136318"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685228136318"}]},"ts":"1685228136318"} 2023-05-27 22:55:36,325 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-27 22:55:36,325 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 263706a96f5d5637fd790130c3614820, server=jenkins-hbase4.apache.org,36521,1685228133387 in 190 msec 2023-05-27 22:55:36,328 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-27 22:55:36,328 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=263706a96f5d5637fd790130c3614820, ASSIGN in 353 msec 2023-05-27 22:55:36,329 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-27 22:55:36,330 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685228136329"}]},"ts":"1685228136329"} 2023-05-27 22:55:36,332 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLED in hbase:meta 2023-05-27 22:55:36,335 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-05-27 22:55:36,337 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling in 431 msec 2023-05-27 22:55:40,375 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-05-27 22:55:40,494 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-27 22:55:40,495 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-27 22:55:40,496 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testSlowSyncLogRolling' 2023-05-27 22:55:42,378 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-27 22:55:42,378 INFO 
[HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-05-27 22:55:45,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41693] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-27 22:55:45,932 INFO [Listener at localhost/33029] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testSlowSyncLogRolling, procId: 9 completed 2023-05-27 22:55:45,936 DEBUG [Listener at localhost/33029] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testSlowSyncLogRolling 2023-05-27 22:55:45,937 DEBUG [Listener at localhost/33029] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testSlowSyncLogRolling,,1685228135900.263706a96f5d5637fd790130c3614820. 2023-05-27 22:55:57,964 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36521] regionserver.HRegion(9158): Flush requested on 263706a96f5d5637fd790130c3614820 2023-05-27 22:55:57,965 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 263706a96f5d5637fd790130c3614820 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-27 22:55:58,032 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/.tmp/info/e2c3c71aa53547a68e5d77b51244a2e2 2023-05-27 22:55:58,076 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/.tmp/info/e2c3c71aa53547a68e5d77b51244a2e2 as hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/info/e2c3c71aa53547a68e5d77b51244a2e2 2023-05-27 22:55:58,085 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/info/e2c3c71aa53547a68e5d77b51244a2e2, entries=7, sequenceid=11, filesize=12.1 K 2023-05-27 22:55:58,087 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 263706a96f5d5637fd790130c3614820 in 122ms, sequenceid=11, compaction requested=false 2023-05-27 22:55:58,089 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 263706a96f5d5637fd790130c3614820: 2023-05-27 22:56:06,176 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34727,DS-74f5ab3f-8fd9-4954-9a00-71c4f8ece543,DISK], DatanodeInfoWithStorage[127.0.0.1:40243,DS-f41ce752-1ebd-4ec6-bd21-d921224aa838,DISK]] 2023-05-27 22:56:08,379 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34727,DS-74f5ab3f-8fd9-4954-9a00-71c4f8ece543,DISK], DatanodeInfoWithStorage[127.0.0.1:40243,DS-f41ce752-1ebd-4ec6-bd21-d921224aa838,DISK]] 2023-05-27 22:56:10,582 INFO [sync.4] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:34727,DS-74f5ab3f-8fd9-4954-9a00-71c4f8ece543,DISK], DatanodeInfoWithStorage[127.0.0.1:40243,DS-f41ce752-1ebd-4ec6-bd21-d921224aa838,DISK]] 2023-05-27 22:56:12,785 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34727,DS-74f5ab3f-8fd9-4954-9a00-71c4f8ece543,DISK], DatanodeInfoWithStorage[127.0.0.1:40243,DS-f41ce752-1ebd-4ec6-bd21-d921224aa838,DISK]] 2023-05-27 22:56:12,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36521] regionserver.HRegion(9158): Flush requested on 263706a96f5d5637fd790130c3614820 2023-05-27 22:56:12,786 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 263706a96f5d5637fd790130c3614820 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-27 22:56:12,987 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34727,DS-74f5ab3f-8fd9-4954-9a00-71c4f8ece543,DISK], DatanodeInfoWithStorage[127.0.0.1:40243,DS-f41ce752-1ebd-4ec6-bd21-d921224aa838,DISK]] 2023-05-27 22:56:13,007 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=21 (bloomFilter=true), to=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/.tmp/info/a4094590b9414a7fae23eddef38ec153 2023-05-27 22:56:13,018 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/.tmp/info/a4094590b9414a7fae23eddef38ec153 as hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/info/a4094590b9414a7fae23eddef38ec153 2023-05-27 22:56:13,026 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/info/a4094590b9414a7fae23eddef38ec153, entries=7, sequenceid=21, filesize=12.1 K 2023-05-27 22:56:13,228 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34727,DS-74f5ab3f-8fd9-4954-9a00-71c4f8ece543,DISK], DatanodeInfoWithStorage[127.0.0.1:40243,DS-f41ce752-1ebd-4ec6-bd21-d921224aa838,DISK]] 2023-05-27 22:56:13,228 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 263706a96f5d5637fd790130c3614820 in 442ms, sequenceid=21, compaction requested=false 2023-05-27 22:56:13,228 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 263706a96f5d5637fd790130c3614820: 2023-05-27 22:56:13,229 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=24.2 K, sizeToCheck=16.0 K 2023-05-27 22:56:13,229 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-27 22:56:13,230 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/info/e2c3c71aa53547a68e5d77b51244a2e2 
because midkey is the same as first or last row 2023-05-27 22:56:14,988 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34727,DS-74f5ab3f-8fd9-4954-9a00-71c4f8ece543,DISK], DatanodeInfoWithStorage[127.0.0.1:40243,DS-f41ce752-1ebd-4ec6-bd21-d921224aa838,DISK]] 2023-05-27 22:56:17,191 WARN [sync.4] wal.AbstractFSWAL(1302): Requesting log roll because we exceeded slow sync threshold; count=7, threshold=5, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34727,DS-74f5ab3f-8fd9-4954-9a00-71c4f8ece543,DISK], DatanodeInfoWithStorage[127.0.0.1:40243,DS-f41ce752-1ebd-4ec6-bd21-d921224aa838,DISK]] 2023-05-27 22:56:17,192 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C36521%2C1685228133387:(num 1685228134679) roll requested 2023-05-27 22:56:17,193 INFO [sync.4] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34727,DS-74f5ab3f-8fd9-4954-9a00-71c4f8ece543,DISK], DatanodeInfoWithStorage[127.0.0.1:40243,DS-f41ce752-1ebd-4ec6-bd21-d921224aa838,DISK]] 2023-05-27 22:56:17,404 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34727,DS-74f5ab3f-8fd9-4954-9a00-71c4f8ece543,DISK], DatanodeInfoWithStorage[127.0.0.1:40243,DS-f41ce752-1ebd-4ec6-bd21-d921224aa838,DISK]] 2023-05-27 22:56:17,406 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/WALs/jenkins-hbase4.apache.org,36521,1685228133387/jenkins-hbase4.apache.org%2C36521%2C1685228133387.1685228134679 with entries=24, filesize=20.43 KB; new WAL /user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/WALs/jenkins-hbase4.apache.org,36521,1685228133387/jenkins-hbase4.apache.org%2C36521%2C1685228133387.1685228177193 2023-05-27 22:56:17,407 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40243,DS-f41ce752-1ebd-4ec6-bd21-d921224aa838,DISK], DatanodeInfoWithStorage[127.0.0.1:34727,DS-74f5ab3f-8fd9-4954-9a00-71c4f8ece543,DISK]] 2023-05-27 22:56:17,407 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/WALs/jenkins-hbase4.apache.org,36521,1685228133387/jenkins-hbase4.apache.org%2C36521%2C1685228133387.1685228134679 is not closed yet, will try archiving it next time 2023-05-27 22:56:27,205 INFO [Listener at localhost/33029] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-05-27 22:56:32,208 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 5001 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40243,DS-f41ce752-1ebd-4ec6-bd21-d921224aa838,DISK], DatanodeInfoWithStorage[127.0.0.1:34727,DS-74f5ab3f-8fd9-4954-9a00-71c4f8ece543,DISK]] 2023-05-27 22:56:32,208 WARN [sync.0] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5001 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40243,DS-f41ce752-1ebd-4ec6-bd21-d921224aa838,DISK], DatanodeInfoWithStorage[127.0.0.1:34727,DS-74f5ab3f-8fd9-4954-9a00-71c4f8ece543,DISK]] 2023-05-27 22:56:32,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36521] regionserver.HRegion(9158): Flush requested on 263706a96f5d5637fd790130c3614820 2023-05-27 22:56:32,208 DEBUG 
[regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C36521%2C1685228133387:(num 1685228177193) roll requested 2023-05-27 22:56:32,208 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 263706a96f5d5637fd790130c3614820 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-27 22:56:34,210 INFO [Listener at localhost/33029] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-05-27 22:56:37,210 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40243,DS-f41ce752-1ebd-4ec6-bd21-d921224aa838,DISK], DatanodeInfoWithStorage[127.0.0.1:34727,DS-74f5ab3f-8fd9-4954-9a00-71c4f8ece543,DISK]] 2023-05-27 22:56:37,210 WARN [sync.1] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5000 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40243,DS-f41ce752-1ebd-4ec6-bd21-d921224aa838,DISK], DatanodeInfoWithStorage[127.0.0.1:34727,DS-74f5ab3f-8fd9-4954-9a00-71c4f8ece543,DISK]] 2023-05-27 22:56:37,226 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40243,DS-f41ce752-1ebd-4ec6-bd21-d921224aa838,DISK], DatanodeInfoWithStorage[127.0.0.1:34727,DS-74f5ab3f-8fd9-4954-9a00-71c4f8ece543,DISK]] 2023-05-27 22:56:37,226 WARN [sync.2] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5000 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40243,DS-f41ce752-1ebd-4ec6-bd21-d921224aa838,DISK], DatanodeInfoWithStorage[127.0.0.1:34727,DS-74f5ab3f-8fd9-4954-9a00-71c4f8ece543,DISK]] 2023-05-27 22:56:37,226 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/WALs/jenkins-hbase4.apache.org,36521,1685228133387/jenkins-hbase4.apache.org%2C36521%2C1685228133387.1685228177193 with entries=6, filesize=6.07 KB; new WAL /user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/WALs/jenkins-hbase4.apache.org,36521,1685228133387/jenkins-hbase4.apache.org%2C36521%2C1685228133387.1685228192208 2023-05-27 22:56:37,227 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34727,DS-74f5ab3f-8fd9-4954-9a00-71c4f8ece543,DISK], DatanodeInfoWithStorage[127.0.0.1:40243,DS-f41ce752-1ebd-4ec6-bd21-d921224aa838,DISK]] 2023-05-27 22:56:37,227 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/WALs/jenkins-hbase4.apache.org,36521,1685228133387/jenkins-hbase4.apache.org%2C36521%2C1685228133387.1685228177193 is not closed yet, will try archiving it next time 2023-05-27 22:56:37,229 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=31 (bloomFilter=true), to=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/.tmp/info/e8c46948064e44f0845c6b6b6a087541 2023-05-27 22:56:37,240 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/.tmp/info/e8c46948064e44f0845c6b6b6a087541 as hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/info/e8c46948064e44f0845c6b6b6a087541 2023-05-27 22:56:37,265 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/info/e8c46948064e44f0845c6b6b6a087541, entries=7, sequenceid=31, filesize=12.1 K 2023-05-27 22:56:37,269 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 263706a96f5d5637fd790130c3614820 in 5061ms, sequenceid=31, compaction requested=true 2023-05-27 22:56:37,269 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 263706a96f5d5637fd790130c3614820: 2023-05-27 22:56:37,269 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=36.3 K, sizeToCheck=16.0 K 2023-05-27 22:56:37,269 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-27 22:56:37,270 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/info/e2c3c71aa53547a68e5d77b51244a2e2 because midkey is the same as first or last row 2023-05-27 22:56:37,272 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 22:56:37,272 DEBUG [RS:0;jenkins-hbase4:36521-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-27 22:56:37,276 DEBUG [RS:0;jenkins-hbase4:36521-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 37197 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-27 22:56:37,278 DEBUG [RS:0;jenkins-hbase4:36521-shortCompactions-0] regionserver.HStore(1912): 263706a96f5d5637fd790130c3614820/info is initiating minor compaction (all files) 2023-05-27 22:56:37,278 INFO [RS:0;jenkins-hbase4:36521-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 263706a96f5d5637fd790130c3614820/info in TestLogRolling-testSlowSyncLogRolling,,1685228135900.263706a96f5d5637fd790130c3614820. 
2023-05-27 22:56:37,279 INFO [RS:0;jenkins-hbase4:36521-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/info/e2c3c71aa53547a68e5d77b51244a2e2, hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/info/a4094590b9414a7fae23eddef38ec153, hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/info/e8c46948064e44f0845c6b6b6a087541] into tmpdir=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/.tmp, totalSize=36.3 K 2023-05-27 22:56:37,280 DEBUG [RS:0;jenkins-hbase4:36521-shortCompactions-0] compactions.Compactor(207): Compacting e2c3c71aa53547a68e5d77b51244a2e2, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, earliestPutTs=1685228145943 2023-05-27 22:56:37,281 DEBUG [RS:0;jenkins-hbase4:36521-shortCompactions-0] compactions.Compactor(207): Compacting a4094590b9414a7fae23eddef38ec153, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=21, earliestPutTs=1685228159966 2023-05-27 22:56:37,281 DEBUG [RS:0;jenkins-hbase4:36521-shortCompactions-0] compactions.Compactor(207): Compacting e8c46948064e44f0845c6b6b6a087541, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=31, earliestPutTs=1685228174787 2023-05-27 22:56:37,309 INFO [RS:0;jenkins-hbase4:36521-shortCompactions-0] throttle.PressureAwareThroughputController(145): 263706a96f5d5637fd790130c3614820#info#compaction#3 average throughput is 10.77 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-27 22:56:37,334 DEBUG [RS:0;jenkins-hbase4:36521-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/.tmp/info/81cd8a2b59db4a1abcbe8fdce50c6503 as hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/info/81cd8a2b59db4a1abcbe8fdce50c6503 2023-05-27 22:56:37,353 INFO [RS:0;jenkins-hbase4:36521-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 263706a96f5d5637fd790130c3614820/info of 263706a96f5d5637fd790130c3614820 into 81cd8a2b59db4a1abcbe8fdce50c6503(size=27.0 K), total size for store is 27.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-27 22:56:37,353 DEBUG [RS:0;jenkins-hbase4:36521-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 263706a96f5d5637fd790130c3614820: 2023-05-27 22:56:37,353 INFO [RS:0;jenkins-hbase4:36521-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testSlowSyncLogRolling,,1685228135900.263706a96f5d5637fd790130c3614820., storeName=263706a96f5d5637fd790130c3614820/info, priority=13, startTime=1685228197272; duration=0sec 2023-05-27 22:56:37,354 DEBUG [RS:0;jenkins-hbase4:36521-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=27.0 K, sizeToCheck=16.0 K 2023-05-27 22:56:37,355 DEBUG [RS:0;jenkins-hbase4:36521-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-27 22:56:37,355 DEBUG [RS:0;jenkins-hbase4:36521-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/info/81cd8a2b59db4a1abcbe8fdce50c6503 because midkey is the same as first or last row 2023-05-27 22:56:37,355 DEBUG [RS:0;jenkins-hbase4:36521-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 22:56:49,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36521] regionserver.HRegion(9158): Flush requested on 263706a96f5d5637fd790130c3614820 2023-05-27 22:56:49,331 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 263706a96f5d5637fd790130c3614820 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-27 22:56:49,349 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=42 (bloomFilter=true), to=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/.tmp/info/2057344e4e9c4b289b0a6c4234b97c2b 2023-05-27 22:56:49,356 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/.tmp/info/2057344e4e9c4b289b0a6c4234b97c2b as hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/info/2057344e4e9c4b289b0a6c4234b97c2b 2023-05-27 22:56:49,363 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/info/2057344e4e9c4b289b0a6c4234b97c2b, entries=7, sequenceid=42, filesize=12.1 K 2023-05-27 22:56:49,365 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 263706a96f5d5637fd790130c3614820 in 34ms, sequenceid=42, compaction requested=false 2023-05-27 22:56:49,365 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 263706a96f5d5637fd790130c3614820: 2023-05-27 22:56:49,365 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=39.1 K, sizeToCheck=16.0 K 2023-05-27 
22:56:49,365 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-27 22:56:49,365 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/info/81cd8a2b59db4a1abcbe8fdce50c6503 because midkey is the same as first or last row 2023-05-27 22:56:57,339 INFO [Listener at localhost/33029] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-27 22:56:57,340 INFO [Listener at localhost/33029] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-27 22:56:57,340 DEBUG [Listener at localhost/33029] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x088e51b8 to 127.0.0.1:52451 2023-05-27 22:56:57,340 DEBUG [Listener at localhost/33029] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 22:56:57,341 DEBUG [Listener at localhost/33029] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-27 22:56:57,341 DEBUG [Listener at localhost/33029] util.JVMClusterUtil(257): Found active master hash=1188026390, stopped=false 2023-05-27 22:56:57,342 INFO [Listener at localhost/33029] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,41693,1685228132211 2023-05-27 22:56:57,345 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-27 22:56:57,345 INFO [Listener at localhost/33029] procedure2.ProcedureExecutor(629): Stopping 2023-05-27 22:56:57,345 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:56:57,345 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): regionserver:36521-0x1006edb17070001, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-27 22:56:57,346 DEBUG [Listener at localhost/33029] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0862b985 to 127.0.0.1:52451 2023-05-27 22:56:57,346 DEBUG [Listener at localhost/33029] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 22:56:57,346 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 22:56:57,347 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36521-0x1006edb17070001, quorum=127.0.0.1:52451, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 22:56:57,347 INFO [Listener at localhost/33029] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,36521,1685228133387' ***** 2023-05-27 22:56:57,347 INFO [Listener at localhost/33029] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-27 22:56:57,347 INFO [RS:0;jenkins-hbase4:36521] regionserver.HeapMemoryManager(220): Stopping 2023-05-27 22:56:57,347 INFO [RS:0;jenkins-hbase4:36521] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-05-27 22:56:57,347 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-27 22:56:57,347 INFO [RS:0;jenkins-hbase4:36521] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-27 22:56:57,348 INFO [RS:0;jenkins-hbase4:36521] regionserver.HRegionServer(3303): Received CLOSE for c6f7fd485edb162049f588b53c69eb6d 2023-05-27 22:56:57,348 INFO [RS:0;jenkins-hbase4:36521] regionserver.HRegionServer(3303): Received CLOSE for 263706a96f5d5637fd790130c3614820 2023-05-27 22:56:57,349 INFO [RS:0;jenkins-hbase4:36521] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36521,1685228133387 2023-05-27 22:56:57,349 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c6f7fd485edb162049f588b53c69eb6d, disabling compactions & flushes 2023-05-27 22:56:57,349 DEBUG [RS:0;jenkins-hbase4:36521] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2d1a90ed to 127.0.0.1:52451 2023-05-27 22:56:57,349 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685228135070.c6f7fd485edb162049f588b53c69eb6d. 2023-05-27 22:56:57,349 DEBUG [RS:0;jenkins-hbase4:36521] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 22:56:57,349 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685228135070.c6f7fd485edb162049f588b53c69eb6d. 2023-05-27 22:56:57,349 INFO [RS:0;jenkins-hbase4:36521] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-27 22:56:57,349 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685228135070.c6f7fd485edb162049f588b53c69eb6d. after waiting 0 ms 2023-05-27 22:56:57,350 INFO [RS:0;jenkins-hbase4:36521] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-27 22:56:57,350 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685228135070.c6f7fd485edb162049f588b53c69eb6d. 2023-05-27 22:56:57,350 INFO [RS:0;jenkins-hbase4:36521] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-05-27 22:56:57,350 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing c6f7fd485edb162049f588b53c69eb6d 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-27 22:56:57,350 INFO [RS:0;jenkins-hbase4:36521] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-27 22:56:57,350 INFO [RS:0;jenkins-hbase4:36521] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-27 22:56:57,350 DEBUG [RS:0;jenkins-hbase4:36521] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, c6f7fd485edb162049f588b53c69eb6d=hbase:namespace,,1685228135070.c6f7fd485edb162049f588b53c69eb6d., 263706a96f5d5637fd790130c3614820=TestLogRolling-testSlowSyncLogRolling,,1685228135900.263706a96f5d5637fd790130c3614820.} 2023-05-27 22:56:57,351 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-27 22:56:57,351 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-27 22:56:57,351 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-27 22:56:57,351 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-27 22:56:57,351 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-27 22:56:57,351 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.87 KB heapSize=5.38 KB 2023-05-27 22:56:57,352 DEBUG [RS:0;jenkins-hbase4:36521] regionserver.HRegionServer(1504): Waiting on 1588230740, 263706a96f5d5637fd790130c3614820, c6f7fd485edb162049f588b53c69eb6d 2023-05-27 22:56:57,375 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/namespace/c6f7fd485edb162049f588b53c69eb6d/.tmp/info/ddc7a3ca42a8447f995d4deeca8e5dbc 2023-05-27 22:56:57,376 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.64 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/meta/1588230740/.tmp/info/9113aa7fbc9147478fde34aa28e2370a 2023-05-27 22:56:57,384 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/namespace/c6f7fd485edb162049f588b53c69eb6d/.tmp/info/ddc7a3ca42a8447f995d4deeca8e5dbc as hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/namespace/c6f7fd485edb162049f588b53c69eb6d/info/ddc7a3ca42a8447f995d4deeca8e5dbc 2023-05-27 22:56:57,396 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/namespace/c6f7fd485edb162049f588b53c69eb6d/info/ddc7a3ca42a8447f995d4deeca8e5dbc, entries=2, sequenceid=6, filesize=4.8 K 2023-05-27 22:56:57,398 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for c6f7fd485edb162049f588b53c69eb6d in 48ms, sequenceid=6, compaction requested=false 2023-05-27 22:56:57,399 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=232 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/meta/1588230740/.tmp/table/3954dfee314e4c6994b14a2d34750d7c 2023-05-27 22:56:57,405 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/namespace/c6f7fd485edb162049f588b53c69eb6d/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-27 22:56:57,407 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685228135070.c6f7fd485edb162049f588b53c69eb6d. 2023-05-27 22:56:57,407 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c6f7fd485edb162049f588b53c69eb6d: 2023-05-27 22:56:57,407 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685228135070.c6f7fd485edb162049f588b53c69eb6d. 2023-05-27 22:56:57,407 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 263706a96f5d5637fd790130c3614820, disabling compactions & flushes 2023-05-27 22:56:57,407 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1685228135900.263706a96f5d5637fd790130c3614820. 2023-05-27 22:56:57,407 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1685228135900.263706a96f5d5637fd790130c3614820. 2023-05-27 22:56:57,407 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1685228135900.263706a96f5d5637fd790130c3614820. after waiting 0 ms 2023-05-27 22:56:57,407 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1685228135900.263706a96f5d5637fd790130c3614820. 
2023-05-27 22:56:57,408 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 263706a96f5d5637fd790130c3614820 1/1 column families, dataSize=3.15 KB heapSize=3.63 KB 2023-05-27 22:56:57,409 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/meta/1588230740/.tmp/info/9113aa7fbc9147478fde34aa28e2370a as hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/meta/1588230740/info/9113aa7fbc9147478fde34aa28e2370a 2023-05-27 22:56:57,418 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/meta/1588230740/info/9113aa7fbc9147478fde34aa28e2370a, entries=20, sequenceid=14, filesize=7.4 K 2023-05-27 22:56:57,422 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/meta/1588230740/.tmp/table/3954dfee314e4c6994b14a2d34750d7c as hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/meta/1588230740/table/3954dfee314e4c6994b14a2d34750d7c 2023-05-27 22:56:57,423 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.15 KB at sequenceid=48 (bloomFilter=true), to=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/.tmp/info/2990ff59b1f8483b82414f81a17aefff 2023-05-27 22:56:57,430 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/meta/1588230740/table/3954dfee314e4c6994b14a2d34750d7c, entries=4, sequenceid=14, filesize=4.8 K 2023-05-27 22:56:57,430 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/.tmp/info/2990ff59b1f8483b82414f81a17aefff as hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/info/2990ff59b1f8483b82414f81a17aefff 2023-05-27 22:56:57,431 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.87 KB/2934, heapSize ~5.09 KB/5216, currentSize=0 B/0 for 1588230740 in 80ms, sequenceid=14, compaction requested=false 2023-05-27 22:56:57,440 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/info/2990ff59b1f8483b82414f81a17aefff, entries=3, sequenceid=48, filesize=7.9 K 2023-05-27 22:56:57,442 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.15 KB/3228, heapSize ~3.61 KB/3696, currentSize=0 B/0 for 263706a96f5d5637fd790130c3614820 in 35ms, sequenceid=48, compaction requested=true 2023-05-27 22:56:57,444 DEBUG 
[StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685228135900.263706a96f5d5637fd790130c3614820.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/info/e2c3c71aa53547a68e5d77b51244a2e2, hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/info/a4094590b9414a7fae23eddef38ec153, hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/info/e8c46948064e44f0845c6b6b6a087541] to archive 2023-05-27 22:56:57,445 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-05-27 22:56:57,445 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685228135900.263706a96f5d5637fd790130c3614820.-1] backup.HFileArchiver(360): Archiving compacted files. 2023-05-27 22:56:57,447 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-27 22:56:57,448 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-27 22:56:57,448 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-27 22:56:57,448 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-27 22:56:57,453 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685228135900.263706a96f5d5637fd790130c3614820.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/info/e2c3c71aa53547a68e5d77b51244a2e2 to hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/archive/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/info/e2c3c71aa53547a68e5d77b51244a2e2 2023-05-27 22:56:57,455 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685228135900.263706a96f5d5637fd790130c3614820.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/info/a4094590b9414a7fae23eddef38ec153 to hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/archive/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/info/a4094590b9414a7fae23eddef38ec153 2023-05-27 22:56:57,457 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685228135900.263706a96f5d5637fd790130c3614820.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/info/e8c46948064e44f0845c6b6b6a087541 to 
hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/archive/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/info/e8c46948064e44f0845c6b6b6a087541 2023-05-27 22:56:57,490 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/data/default/TestLogRolling-testSlowSyncLogRolling/263706a96f5d5637fd790130c3614820/recovered.edits/51.seqid, newMaxSeqId=51, maxSeqId=1 2023-05-27 22:56:57,492 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1685228135900.263706a96f5d5637fd790130c3614820. 2023-05-27 22:56:57,493 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 263706a96f5d5637fd790130c3614820: 2023-05-27 22:56:57,493 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testSlowSyncLogRolling,,1685228135900.263706a96f5d5637fd790130c3614820. 2023-05-27 22:56:57,552 INFO [RS:0;jenkins-hbase4:36521] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36521,1685228133387; all regions closed. 2023-05-27 22:56:57,554 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/WALs/jenkins-hbase4.apache.org,36521,1685228133387 2023-05-27 22:56:57,567 DEBUG [RS:0;jenkins-hbase4:36521] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/oldWALs 2023-05-27 22:56:57,567 INFO [RS:0;jenkins-hbase4:36521] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C36521%2C1685228133387.meta:.meta(num 1685228134827) 2023-05-27 22:56:57,568 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/WALs/jenkins-hbase4.apache.org,36521,1685228133387 2023-05-27 22:56:57,585 DEBUG [RS:0;jenkins-hbase4:36521] wal.AbstractFSWAL(1028): Moved 3 WAL file(s) to /user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/oldWALs 2023-05-27 22:56:57,585 INFO [RS:0;jenkins-hbase4:36521] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C36521%2C1685228133387:(num 1685228192208) 2023-05-27 22:56:57,585 DEBUG [RS:0;jenkins-hbase4:36521] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 22:56:57,585 INFO [RS:0;jenkins-hbase4:36521] regionserver.LeaseManager(133): Closed leases 2023-05-27 22:56:57,586 INFO [RS:0;jenkins-hbase4:36521] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-27 22:56:57,586 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-27 22:56:57,587 INFO [RS:0;jenkins-hbase4:36521] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36521
2023-05-27 22:56:57,597 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): regionserver:36521-0x1006edb17070001, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36521,1685228133387
2023-05-27 22:56:57,597 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): regionserver:36521-0x1006edb17070001, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-05-27 22:56:57,597 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-05-27 22:56:57,599 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,36521,1685228133387]
2023-05-27 22:56:57,599 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,36521,1685228133387; numProcessing=1
2023-05-27 22:56:57,699 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): regionserver:36521-0x1006edb17070001, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-27 22:56:57,699 INFO [RS:0;jenkins-hbase4:36521] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36521,1685228133387; zookeeper connection closed.
2023-05-27 22:56:57,700 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): regionserver:36521-0x1006edb17070001, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-27 22:56:57,701 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2ae4bd6e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2ae4bd6e
2023-05-27 22:56:57,701 INFO [Listener at localhost/33029] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete
2023-05-27 22:56:57,701 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,36521,1685228133387 already deleted, retry=false
2023-05-27 22:56:57,702 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,36521,1685228133387 expired; onlineServers=0
2023-05-27 22:56:57,702 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,41693,1685228132211' *****
2023-05-27 22:56:57,702 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0
2023-05-27 22:56:57,703 DEBUG [M:0;jenkins-hbase4:41693] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7f2d22d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0
2023-05-27 22:56:57,703 INFO [M:0;jenkins-hbase4:41693] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41693,1685228132211
2023-05-27 22:56:57,703 INFO [M:0;jenkins-hbase4:41693] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41693,1685228132211; all regions closed.
2023-05-27 22:56:57,703 DEBUG [M:0;jenkins-hbase4:41693] ipc.AbstractRpcClient(494): Stopping rpc client
2023-05-27 22:56:57,703 DEBUG [M:0;jenkins-hbase4:41693] cleaner.LogCleaner(198): Cancelling LogCleaner
2023-05-27 22:56:57,703 DEBUG [M:0;jenkins-hbase4:41693] cleaner.HFileCleaner(317): Stopping file delete threads
2023-05-27 22:56:57,704 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685228134325] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685228134325,5,FailOnTimeoutGroup]
2023-05-27 22:56:57,704 INFO [M:0;jenkins-hbase4:41693] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish...
2023-05-27 22:56:57,704 INFO [M:0;jenkins-hbase4:41693] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish...
2023-05-27 22:56:57,704 INFO [M:0;jenkins-hbase4:41693] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown
2023-05-27 22:56:57,703 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685228134324] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685228134324,5,FailOnTimeoutGroup]
2023-05-27 22:56:57,704 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting.
2023-05-27 22:56:57,709 DEBUG [M:0;jenkins-hbase4:41693] master.HMaster(1512): Stopping service threads
2023-05-27 22:56:57,710 INFO [M:0;jenkins-hbase4:41693] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher
2023-05-27 22:56:57,711 INFO [M:0;jenkins-hbase4:41693] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false
2023-05-27 22:56:57,711 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating.
2023-05-27 22:56:57,713 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-27 22:56:57,713 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:56:57,713 DEBUG [M:0;jenkins-hbase4:41693] zookeeper.ZKUtil(398): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-27 22:56:57,713 WARN [M:0;jenkins-hbase4:41693] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-27 22:56:57,713 INFO [M:0;jenkins-hbase4:41693] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-27 22:56:57,713 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 22:56:57,714 INFO [M:0;jenkins-hbase4:41693] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-27 22:56:57,715 DEBUG [M:0;jenkins-hbase4:41693] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-27 22:56:57,715 INFO [M:0;jenkins-hbase4:41693] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 22:56:57,715 DEBUG [M:0;jenkins-hbase4:41693] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 22:56:57,715 DEBUG [M:0;jenkins-hbase4:41693] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-27 22:56:57,715 DEBUG [M:0;jenkins-hbase4:41693] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-27 22:56:57,715 INFO [M:0;jenkins-hbase4:41693] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.28 KB heapSize=46.71 KB 2023-05-27 22:56:57,764 INFO [M:0;jenkins-hbase4:41693] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.28 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/3ad47bbae8d14eb3b4b1b3eb82b0ebfa 2023-05-27 22:56:57,775 INFO [M:0;jenkins-hbase4:41693] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3ad47bbae8d14eb3b4b1b3eb82b0ebfa 2023-05-27 22:56:57,777 DEBUG [M:0;jenkins-hbase4:41693] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/3ad47bbae8d14eb3b4b1b3eb82b0ebfa as hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/3ad47bbae8d14eb3b4b1b3eb82b0ebfa 2023-05-27 22:56:57,784 INFO [M:0;jenkins-hbase4:41693] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3ad47bbae8d14eb3b4b1b3eb82b0ebfa 2023-05-27 22:56:57,784 INFO [M:0;jenkins-hbase4:41693] regionserver.HStore(1080): Added hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/3ad47bbae8d14eb3b4b1b3eb82b0ebfa, entries=11, sequenceid=100, filesize=6.1 K 2023-05-27 22:56:57,785 INFO [M:0;jenkins-hbase4:41693] regionserver.HRegion(2948): Finished flush of dataSize ~38.28 KB/39196, heapSize ~46.70 KB/47816, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 70ms, sequenceid=100, compaction requested=false 2023-05-27 22:56:57,787 INFO [M:0;jenkins-hbase4:41693] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 22:56:57,787 DEBUG [M:0;jenkins-hbase4:41693] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 22:56:57,787 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/MasterData/WALs/jenkins-hbase4.apache.org,41693,1685228132211 2023-05-27 22:56:57,792 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-27 22:56:57,792 INFO [M:0;jenkins-hbase4:41693] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-27 22:56:57,793 INFO [M:0;jenkins-hbase4:41693] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41693 2023-05-27 22:56:57,827 DEBUG [M:0;jenkins-hbase4:41693] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,41693,1685228132211 already deleted, retry=false 2023-05-27 22:56:57,930 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 22:56:57,930 INFO [M:0;jenkins-hbase4:41693] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41693,1685228132211; zookeeper connection closed. 
2023-05-27 22:56:57,930 DEBUG [Listener at localhost/33029-EventThread] zookeeper.ZKWatcher(600): master:41693-0x1006edb17070000, quorum=127.0.0.1:52451, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 22:56:57,932 WARN [Listener at localhost/33029] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 22:56:57,934 INFO [Listener at localhost/33029] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 22:56:58,039 WARN [BP-888987336-172.31.14.131-1685228129253 heartbeating to localhost/127.0.0.1:43791] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-27 22:56:58,039 WARN [BP-888987336-172.31.14.131-1685228129253 heartbeating to localhost/127.0.0.1:43791] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-888987336-172.31.14.131-1685228129253 (Datanode Uuid cf0422dd-3525-428f-a55e-76adb5005869) service to localhost/127.0.0.1:43791 2023-05-27 22:56:58,042 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea/cluster_35d77e13-e973-b964-21dc-99ccf153f260/dfs/data/data3/current/BP-888987336-172.31.14.131-1685228129253] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 22:56:58,042 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea/cluster_35d77e13-e973-b964-21dc-99ccf153f260/dfs/data/data4/current/BP-888987336-172.31.14.131-1685228129253] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 22:56:58,043 WARN [Listener at localhost/33029] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 22:56:58,045 INFO [Listener at localhost/33029] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 22:56:58,148 WARN [BP-888987336-172.31.14.131-1685228129253 heartbeating to localhost/127.0.0.1:43791] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-27 22:56:58,148 WARN [BP-888987336-172.31.14.131-1685228129253 heartbeating to localhost/127.0.0.1:43791] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-888987336-172.31.14.131-1685228129253 (Datanode Uuid d3d6b6c9-bea8-49a1-b1f3-e2a88aa3f612) service to localhost/127.0.0.1:43791 2023-05-27 22:56:58,149 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea/cluster_35d77e13-e973-b964-21dc-99ccf153f260/dfs/data/data1/current/BP-888987336-172.31.14.131-1685228129253] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 22:56:58,149 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea/cluster_35d77e13-e973-b964-21dc-99ccf153f260/dfs/data/data2/current/BP-888987336-172.31.14.131-1685228129253] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 22:56:58,183 INFO [Listener at localhost/33029] log.Slf4jLog(67): Stopped 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 22:56:58,296 INFO [Listener at localhost/33029] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-27 22:56:58,337 INFO [Listener at localhost/33029] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-27 22:56:58,350 INFO [Listener at localhost/33029] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=51 (was 10) Potentially hanging thread: regionserver/jenkins-hbase4:0.procedureResultReporter sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1121440679) connection to localhost/127.0.0.1:43791 from jenkins.hfs.0 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-1-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-1 
java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Monitor thread for TaskMonitor java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.monitoring.TaskMonitor$MonitorRunnable.run(TaskMonitor.java:327) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner java.lang.Object.wait(Native Method) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165) org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3693) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase4:0.leaseChecker java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.regionserver.LeaseManager.run(LeaseManager.java:82) Potentially hanging thread: nioEventLoopGroup-4-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:43791 
java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcClient-timer-pool-0 java.lang.Thread.sleep(Native Method) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:600) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:496) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: SnapshotHandlerChoreCleaner sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: SessionTracker java.lang.Thread.sleep(Native Method) org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:151) Potentially hanging thread: IPC Client (1121440679) connection to localhost/127.0.0.1:43791 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Parameter Sending Thread #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Idle-Rpc-Conn-Sweeper-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33029 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) 
org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-1 java.lang.Thread.sleep(Native Method) 
io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.0@localhost:43791 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: HBase-Metrics2-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-2 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1121440679) connection to localhost/127.0.0.1:43791 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-5-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.PeerCache@37b33b07 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:253) org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:46) org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:124) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=442 (was 264) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=60 (was 191), ProcessCount=169 (was 169), AvailableMemoryMB=4452 (was 5176) 2023-05-27 22:56:58,360 INFO [Listener at localhost/33029] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=52, OpenFileDescriptor=442, MaxFileDescriptor=60000, SystemLoadAverage=60, ProcessCount=169, AvailableMemoryMB=4452 2023-05-27 22:56:58,361 INFO [Listener at localhost/33029] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-27 22:56:58,361 INFO [Listener at localhost/33029] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea/hadoop.log.dir so I do NOT create it in target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab 2023-05-27 22:56:58,361 INFO [Listener at localhost/33029] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/4b1d5951-f2ac-f023-2c43-c4b1e8758fea/hadoop.tmp.dir so I do NOT create it in target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab 2023-05-27 22:56:58,361 INFO [Listener at localhost/33029] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/cluster_24fa0eba-0341-662b-7c3a-73a0de2833a3, deleteOnExit=true 2023-05-27 22:56:58,362 INFO [Listener at localhost/33029] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-27 22:56:58,362 INFO [Listener at localhost/33029] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/test.cache.data in system properties and HBase conf 2023-05-27 22:56:58,362 INFO [Listener at localhost/33029] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/hadoop.tmp.dir in system properties and HBase conf 2023-05-27 22:56:58,362 INFO [Listener at localhost/33029] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/hadoop.log.dir in system properties and HBase conf 2023-05-27 22:56:58,363 INFO [Listener at localhost/33029] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-27 22:56:58,363 INFO [Listener at localhost/33029] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-27 22:56:58,363 INFO [Listener at localhost/33029] 
hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-27 22:56:58,363 DEBUG [Listener at localhost/33029] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-27 22:56:58,364 INFO [Listener at localhost/33029] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-27 22:56:58,364 INFO [Listener at localhost/33029] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-27 22:56:58,364 INFO [Listener at localhost/33029] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-27 22:56:58,364 INFO [Listener at localhost/33029] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-27 22:56:58,365 INFO [Listener at localhost/33029] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-27 22:56:58,365 INFO [Listener at localhost/33029] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-27 22:56:58,365 INFO [Listener at localhost/33029] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-27 22:56:58,365 INFO [Listener at localhost/33029] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-27 22:56:58,366 INFO [Listener at localhost/33029] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-27 22:56:58,366 INFO [Listener at localhost/33029] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/nfs.dump.dir in system properties and HBase conf 2023-05-27 22:56:58,366 INFO [Listener at localhost/33029] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/java.io.tmpdir in system properties and HBase conf 2023-05-27 22:56:58,366 INFO [Listener at localhost/33029] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-27 22:56:58,366 INFO [Listener at localhost/33029] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-27 22:56:58,367 INFO [Listener at localhost/33029] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-27 22:56:58,369 WARN [Listener at localhost/33029] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-27 22:56:58,372 WARN [Listener at localhost/33029] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-27 22:56:58,372 WARN [Listener at localhost/33029] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-27 22:56:58,442 WARN [Listener at localhost/33029] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 22:56:58,446 INFO [Listener at localhost/33029] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 22:56:58,453 INFO [Listener at localhost/33029] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/java.io.tmpdir/Jetty_localhost_36467_hdfs____.1a049w/webapp 2023-05-27 22:56:58,533 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-27 22:56:58,580 INFO [Listener at localhost/33029] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36467 2023-05-27 22:56:58,582 WARN [Listener at localhost/33029] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-27 22:56:58,585 WARN [Listener at localhost/33029] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-27 22:56:58,585 WARN [Listener at localhost/33029] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-27 22:56:58,630 WARN [Listener at localhost/44813] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 22:56:58,646 WARN [Listener at localhost/44813] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 22:56:58,650 WARN [Listener at localhost/44813] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 22:56:58,652 INFO [Listener at localhost/44813] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 22:56:58,658 INFO [Listener at localhost/44813] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/java.io.tmpdir/Jetty_localhost_38613_datanode____.l56s7a/webapp 2023-05-27 22:56:58,748 INFO [Listener at localhost/44813] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38613 2023-05-27 22:56:58,755 WARN [Listener at localhost/38203] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 22:56:58,770 WARN [Listener at localhost/38203] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 22:56:58,772 WARN [Listener at localhost/38203] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 22:56:58,773 INFO [Listener at localhost/38203] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 22:56:58,776 INFO [Listener at localhost/38203] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/java.io.tmpdir/Jetty_localhost_36033_datanode____.sidsgc/webapp 2023-05-27 22:56:58,939 INFO [Listener at localhost/38203] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36033 2023-05-27 22:56:58,948 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x942989899de58d09: Processing first storage report for DS-6a8213fd-cd62-4a90-81df-bf520a89a643 from datanode 2fcaa603-6461-47f4-912e-80da2ed233ee 2023-05-27 22:56:58,949 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x942989899de58d09: from storage DS-6a8213fd-cd62-4a90-81df-bf520a89a643 node DatanodeRegistration(127.0.0.1:34645, datanodeUuid=2fcaa603-6461-47f4-912e-80da2ed233ee, infoPort=37283, infoSecurePort=0, ipcPort=38203, storageInfo=lv=-57;cid=testClusterID;nsid=1609750335;c=1685228218376), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-27 22:56:58,949 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x942989899de58d09: Processing first storage report for DS-dd672ed7-1050-450a-9587-559341260eac from 
datanode 2fcaa603-6461-47f4-912e-80da2ed233ee 2023-05-27 22:56:58,949 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x942989899de58d09: from storage DS-dd672ed7-1050-450a-9587-559341260eac node DatanodeRegistration(127.0.0.1:34645, datanodeUuid=2fcaa603-6461-47f4-912e-80da2ed233ee, infoPort=37283, infoSecurePort=0, ipcPort=38203, storageInfo=lv=-57;cid=testClusterID;nsid=1609750335;c=1685228218376), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 22:56:58,962 WARN [Listener at localhost/38643] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 22:56:59,074 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2f638ac1f2153fd8: Processing first storage report for DS-cd4dd9f2-9785-449d-b4b7-fd846ca22237 from datanode ad6b7259-200f-4fc4-8e40-263bca51be38 2023-05-27 22:56:59,074 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2f638ac1f2153fd8: from storage DS-cd4dd9f2-9785-449d-b4b7-fd846ca22237 node DatanodeRegistration(127.0.0.1:38065, datanodeUuid=ad6b7259-200f-4fc4-8e40-263bca51be38, infoPort=39615, infoSecurePort=0, ipcPort=38643, storageInfo=lv=-57;cid=testClusterID;nsid=1609750335;c=1685228218376), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 22:56:59,074 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2f638ac1f2153fd8: Processing first storage report for DS-5eb20716-d264-4745-8874-af0128071b34 from datanode ad6b7259-200f-4fc4-8e40-263bca51be38 2023-05-27 22:56:59,074 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2f638ac1f2153fd8: from storage DS-5eb20716-d264-4745-8874-af0128071b34 node DatanodeRegistration(127.0.0.1:38065, datanodeUuid=ad6b7259-200f-4fc4-8e40-263bca51be38, infoPort=39615, infoSecurePort=0, ipcPort=38643, storageInfo=lv=-57;cid=testClusterID;nsid=1609750335;c=1685228218376), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 22:56:59,099 DEBUG [Listener at localhost/38643] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab 2023-05-27 22:56:59,102 INFO [Listener at localhost/38643] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/cluster_24fa0eba-0341-662b-7c3a-73a0de2833a3/zookeeper_0, clientPort=53199, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/cluster_24fa0eba-0341-662b-7c3a-73a0de2833a3/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/cluster_24fa0eba-0341-662b-7c3a-73a0de2833a3/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-27 22:56:59,103 INFO [Listener at localhost/38643] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=53199 2023-05-27 22:56:59,103 INFO [Listener at localhost/38643] 
fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:56:59,104 INFO [Listener at localhost/38643] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:56:59,121 INFO [Listener at localhost/38643] util.FSUtils(471): Created version file at hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd with version=8 2023-05-27 22:56:59,121 INFO [Listener at localhost/38643] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/hbase-staging 2023-05-27 22:56:59,123 INFO [Listener at localhost/38643] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-27 22:56:59,123 INFO [Listener at localhost/38643] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 22:56:59,123 INFO [Listener at localhost/38643] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-27 22:56:59,123 INFO [Listener at localhost/38643] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-27 22:56:59,123 INFO [Listener at localhost/38643] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 22:56:59,124 INFO [Listener at localhost/38643] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-27 22:56:59,124 INFO [Listener at localhost/38643] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-27 22:56:59,125 INFO [Listener at localhost/38643] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44839 2023-05-27 22:56:59,126 INFO [Listener at localhost/38643] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:56:59,126 INFO [Listener at localhost/38643] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:56:59,128 INFO [Listener at localhost/38643] zookeeper.RecoverableZooKeeper(93): Process identifier=master:44839 connecting to ZooKeeper ensemble=127.0.0.1:53199 2023-05-27 22:56:59,134 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): master:448390x0, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-27 22:56:59,135 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:44839-0x1006edc6de00000 connected 2023-05-27 22:56:59,155 DEBUG [Listener at localhost/38643] 
zookeeper.ZKUtil(164): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 22:56:59,156 DEBUG [Listener at localhost/38643] zookeeper.ZKUtil(164): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 22:56:59,156 DEBUG [Listener at localhost/38643] zookeeper.ZKUtil(164): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-27 22:56:59,157 DEBUG [Listener at localhost/38643] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44839 2023-05-27 22:56:59,157 DEBUG [Listener at localhost/38643] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44839 2023-05-27 22:56:59,158 DEBUG [Listener at localhost/38643] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44839 2023-05-27 22:56:59,158 DEBUG [Listener at localhost/38643] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44839 2023-05-27 22:56:59,161 DEBUG [Listener at localhost/38643] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44839 2023-05-27 22:56:59,161 INFO [Listener at localhost/38643] master.HMaster(444): hbase.rootdir=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd, hbase.cluster.distributed=false 2023-05-27 22:56:59,176 INFO [Listener at localhost/38643] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-27 22:56:59,176 INFO [Listener at localhost/38643] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 22:56:59,176 INFO [Listener at localhost/38643] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-27 22:56:59,176 INFO [Listener at localhost/38643] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-27 22:56:59,176 INFO [Listener at localhost/38643] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 22:56:59,176 INFO [Listener at localhost/38643] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-27 22:56:59,176 INFO [Listener at localhost/38643] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-27 22:56:59,178 INFO [Listener at localhost/38643] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42231 2023-05-27 22:56:59,178 INFO [Listener at localhost/38643] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-27 22:56:59,179 DEBUG [Listener at localhost/38643] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-27 
22:56:59,180 INFO [Listener at localhost/38643] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:56:59,181 INFO [Listener at localhost/38643] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:56:59,182 INFO [Listener at localhost/38643] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42231 connecting to ZooKeeper ensemble=127.0.0.1:53199 2023-05-27 22:56:59,184 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): regionserver:422310x0, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-27 22:56:59,185 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42231-0x1006edc6de00001 connected 2023-05-27 22:56:59,185 DEBUG [Listener at localhost/38643] zookeeper.ZKUtil(164): regionserver:42231-0x1006edc6de00001, quorum=127.0.0.1:53199, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 22:56:59,186 DEBUG [Listener at localhost/38643] zookeeper.ZKUtil(164): regionserver:42231-0x1006edc6de00001, quorum=127.0.0.1:53199, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 22:56:59,186 DEBUG [Listener at localhost/38643] zookeeper.ZKUtil(164): regionserver:42231-0x1006edc6de00001, quorum=127.0.0.1:53199, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-27 22:56:59,187 DEBUG [Listener at localhost/38643] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42231 2023-05-27 22:56:59,187 DEBUG [Listener at localhost/38643] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42231 2023-05-27 22:56:59,188 DEBUG [Listener at localhost/38643] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42231 2023-05-27 22:56:59,188 DEBUG [Listener at localhost/38643] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42231 2023-05-27 22:56:59,188 DEBUG [Listener at localhost/38643] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42231 2023-05-27 22:56:59,189 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,44839,1685228219122 2023-05-27 22:56:59,191 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-27 22:56:59,191 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,44839,1685228219122 2023-05-27 22:56:59,193 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-27 22:56:59,193 DEBUG [Listener at 
localhost/38643-EventThread] zookeeper.ZKWatcher(600): regionserver:42231-0x1006edc6de00001, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-27 22:56:59,193 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:56:59,194 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-27 22:56:59,194 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-27 22:56:59,194 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,44839,1685228219122 from backup master directory 2023-05-27 22:56:59,197 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,44839,1685228219122 2023-05-27 22:56:59,197 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-27 22:56:59,197 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
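Note: the ZKUtil/ZKWatcher records above show the master and region server registering watches on znodes such as /hbase/master and /hbase/running and then reacting to NodeCreated/NodeDeleted events. A small sketch of that watch pattern against the test ensemble, using the plain ZooKeeper client; the client port 53199 comes from the log, everything else is an illustrative assumption:

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    public class MasterZnodeWatchSketch {
      public static void main(String[] args) throws Exception {
        CountDownLatch created = new CountDownLatch(1);
        Watcher watcher = (WatchedEvent event) -> {
          // fires e.g. with type=NodeCreated when the active master writes /hbase/master
          if (event.getType() == Watcher.Event.EventType.NodeCreated
              && "/hbase/master".equals(event.getPath())) {
            created.countDown();
          }
        };
        ZooKeeper zk = new ZooKeeper("127.0.0.1:53199", 30000, watcher);
        // exists() registers a watch even when the znode does not exist yet, which is the
        // "Set watcher on znode that does not yet exist" pattern seen in the log
        Stat stat = zk.exists("/hbase/master", true);
        if (stat == null) {
          created.await();   // wait for the NodeCreated event
        }
        zk.close();
      }
    }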
2023-05-27 22:56:59,197 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,44839,1685228219122 2023-05-27 22:56:59,212 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/hbase.id with ID: 34945865-3213-473b-a000-bdce83b17d14 2023-05-27 22:56:59,224 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:56:59,227 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:56:59,236 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x3a2bb7ae to 127.0.0.1:53199 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 22:56:59,241 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@11b26864, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 22:56:59,241 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-27 22:56:59,242 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-27 22:56:59,242 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 22:56:59,243 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/MasterData/data/master/store-tmp 2023-05-27 22:56:59,253 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:56:59,253 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-27 22:56:59,253 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 22:56:59,253 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 22:56:59,253 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-27 22:56:59,253 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 22:56:59,253 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 22:56:59,253 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 22:56:59,254 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/MasterData/WALs/jenkins-hbase4.apache.org,44839,1685228219122 2023-05-27 22:56:59,257 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44839%2C1685228219122, suffix=, logDir=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/MasterData/WALs/jenkins-hbase4.apache.org,44839,1685228219122, archiveDir=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/MasterData/oldWALs, maxLogs=10 2023-05-27 22:56:59,263 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/MasterData/WALs/jenkins-hbase4.apache.org,44839,1685228219122/jenkins-hbase4.apache.org%2C44839%2C1685228219122.1685228219257 2023-05-27 22:56:59,263 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38065,DS-cd4dd9f2-9785-449d-b4b7-fd846ca22237,DISK], DatanodeInfoWithStorage[127.0.0.1:34645,DS-6a8213fd-cd62-4a90-81df-bf520a89a643,DISK]] 2023-05-27 22:56:59,263 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-27 22:56:59,263 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:56:59,263 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-27 22:56:59,264 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-27 22:56:59,266 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 
1595e783b53d99cd5eef43b6debb2682 2023-05-27 22:56:59,267 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-27 22:56:59,267 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-27 22:56:59,268 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:56:59,269 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-27 22:56:59,270 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-27 22:56:59,273 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-27 22:56:59,275 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 22:56:59,275 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=716252, jitterRate=-0.08923856914043427}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 22:56:59,275 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 22:56:59,276 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-27 22:56:59,277 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-27 22:56:59,277 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 
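Note: the AbstractFSWAL records above report the WAL geometry the master region ended up with (blocksize=256 MB, rollsize=128 MB, maxLogs=10) under the FSHLogProvider. A sketch of the configuration keys that commonly drive those values, assuming standard HBase 2.x property names; the numeric values below are illustrative defaults, not the ones negotiated in this run:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalRollConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // WAL block size on HDFS; the roll size is derived from it via the multiplier
        conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
        // roll when the current WAL reaches blocksize * multiplier (256 MB * 0.5 = 128 MB)
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);
        // how many un-archived WAL files may accumulate before flushes are forced
        conf.setInt("hbase.regionserver.maxlogs", 10);
        // time-based rolling as a backstop
        conf.setLong("hbase.regionserver.logroll.period", 3600 * 1000L);
        System.out.println("maxlogs=" + conf.getInt("hbase.regionserver.maxlogs", -1));
      }
    }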
2023-05-27 22:56:59,277 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-27 22:56:59,278 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-27 22:56:59,278 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-27 22:56:59,278 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-27 22:56:59,281 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-27 22:56:59,282 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-27 22:56:59,294 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-27 22:56:59,294 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
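Note: the balancer records above show StochasticLoadBalancer loading its defaults (maxSteps=1000000, stepsPerRegion=800, maxRunningTime=30000, isByTable=false). A sketch of the corresponding tuning keys, assuming the usual hbase.master.balancer.stochastic.* property names; the values simply echo the defaults printed in the log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class BalancerConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.setLong("hbase.master.balancer.stochastic.maxSteps", 1_000_000L);
        conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
        conf.setLong("hbase.master.balancer.stochastic.maxRunningTime", 30_000L); // ms per balance run
        conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);
        conf.setBoolean("hbase.master.loadbalance.bytable", false); // isByTable=false in the log
        System.out.println(conf.get("hbase.master.balancer.stochastic.maxSteps"));
      }
    }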
2023-05-27 22:56:59,294 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-27 22:56:59,295 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-27 22:56:59,296 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-27 22:56:59,298 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:56:59,299 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-27 22:56:59,300 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-27 22:56:59,301 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-27 22:56:59,303 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-27 22:56:59,303 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): regionserver:42231-0x1006edc6de00001, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-27 22:56:59,303 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:56:59,304 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,44839,1685228219122, sessionid=0x1006edc6de00000, setting cluster-up flag (Was=false) 2023-05-27 22:56:59,307 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:56:59,312 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-27 22:56:59,313 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,44839,1685228219122 2023-05-27 22:56:59,316 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 
22:56:59,321 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-27 22:56:59,322 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,44839,1685228219122 2023-05-27 22:56:59,323 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/.hbase-snapshot/.tmp 2023-05-27 22:56:59,326 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-27 22:56:59,326 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 22:56:59,327 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 22:56:59,327 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 22:56:59,327 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 22:56:59,327 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-05-27 22:56:59,327 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:56:59,327 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-27 22:56:59,327 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:56:59,331 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685228249331 2023-05-27 22:56:59,331 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-27 22:56:59,331 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-27 22:56:59,331 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-27 22:56:59,331 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-27 22:56:59,331 
INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-27 22:56:59,331 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-27 22:56:59,331 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-27 22:56:59,335 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-27 22:56:59,335 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-27 22:56:59,335 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-27 22:56:59,335 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-27 22:56:59,335 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-27 22:56:59,335 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-27 22:56:59,336 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-27 22:56:59,336 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685228219336,5,FailOnTimeoutGroup] 2023-05-27 22:56:59,336 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685228219336,5,FailOnTimeoutGroup] 2023-05-27 22:56:59,336 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-27 22:56:59,336 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-27 22:56:59,336 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-27 22:56:59,336 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
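Note: several of the records above enable ScheduledChore instances (LogsCleaner, HFileCleaner, ReplicationBarrierCleaner, SnapshotCleaner) on the master's ChoreService. A minimal sketch of that chore mechanism, assuming the public ScheduledChore/ChoreService API in hbase-common; the chore and stopper below are invented for illustration, not part of this run:

    import org.apache.hadoop.hbase.ChoreService;
    import org.apache.hadoop.hbase.ScheduledChore;
    import org.apache.hadoop.hbase.Stoppable;

    public class ChoreSketch {
      // trivial Stoppable; real servers pass the master or region server itself
      static final class SimpleStopper implements Stoppable {
        private volatile boolean stopped;
        @Override public void stop(String why) { stopped = true; }
        @Override public boolean isStopped() { return stopped; }
      }

      public static void main(String[] args) throws Exception {
        Stoppable stopper = new SimpleStopper();
        ChoreService choreService = new ChoreService("sketch");
        ScheduledChore ticker = new ScheduledChore("TickerChore", stopper, 1000) {
          @Override protected void chore() {
            // runs every period (1000 ms here) until the stopper is stopped
            System.out.println("chore tick");
          }
        };
        choreService.scheduleChore(ticker);
        Thread.sleep(3000);
        stopper.stop("done");
        choreService.shutdown();
      }
    }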
2023-05-27 22:56:59,336 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-27 22:56:59,352 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-27 22:56:59,353 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-27 22:56:59,353 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd 2023-05-27 22:56:59,363 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:56:59,365 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-27 22:56:59,366 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/hbase/meta/1588230740/info 2023-05-27 22:56:59,367 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-27 22:56:59,367 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:56:59,367 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-27 22:56:59,369 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/hbase/meta/1588230740/rep_barrier 2023-05-27 22:56:59,369 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-27 22:56:59,370 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:56:59,370 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-27 22:56:59,371 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/hbase/meta/1588230740/table 2023-05-27 22:56:59,371 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-27 22:56:59,372 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:56:59,373 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/hbase/meta/1588230740 2023-05-27 22:56:59,374 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/hbase/meta/1588230740 2023-05-27 22:56:59,376 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-27 22:56:59,377 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-27 22:56:59,379 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 22:56:59,379 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=823077, jitterRate=0.046597450971603394}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-27 22:56:59,379 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-27 22:56:59,379 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-27 22:56:59,380 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-27 22:56:59,380 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-27 22:56:59,380 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-27 22:56:59,380 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-27 22:56:59,380 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-27 22:56:59,380 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-27 22:56:59,381 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-27 22:56:59,381 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-27 22:56:59,382 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-27 22:56:59,383 INFO 
[PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-27 22:56:59,385 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-27 22:56:59,390 INFO [RS:0;jenkins-hbase4:42231] regionserver.HRegionServer(951): ClusterId : 34945865-3213-473b-a000-bdce83b17d14 2023-05-27 22:56:59,390 DEBUG [RS:0;jenkins-hbase4:42231] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-27 22:56:59,392 DEBUG [RS:0;jenkins-hbase4:42231] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-27 22:56:59,393 DEBUG [RS:0;jenkins-hbase4:42231] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-27 22:56:59,396 DEBUG [RS:0;jenkins-hbase4:42231] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-27 22:56:59,397 DEBUG [RS:0;jenkins-hbase4:42231] zookeeper.ReadOnlyZKClient(139): Connect 0x12046b34 to 127.0.0.1:53199 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 22:56:59,401 DEBUG [RS:0;jenkins-hbase4:42231] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1cc39704, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 22:56:59,401 DEBUG [RS:0;jenkins-hbase4:42231] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@18fc0044, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-27 22:56:59,410 DEBUG [RS:0;jenkins-hbase4:42231] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:42231 2023-05-27 22:56:59,410 INFO [RS:0;jenkins-hbase4:42231] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-27 22:56:59,410 INFO [RS:0;jenkins-hbase4:42231] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-27 22:56:59,410 DEBUG [RS:0;jenkins-hbase4:42231] regionserver.HRegionServer(1022): About to register with Master. 
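Note: the RS:0 records above show the region server picking up the ClusterId and preparing to report for duty while the master assigns hbase:meta. A sketch of how a client can observe the same cluster identity and membership once startup completes, assuming the standard Connection/Admin API; the quorum settings simply reuse the client port shown earlier in the log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.ClusterMetrics;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ClusterIdSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "127.0.0.1");
        conf.setInt("hbase.zookeeper.property.clientPort", 53199); // port from the log
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
          ClusterMetrics metrics = admin.getClusterMetrics();
          System.out.println("clusterId=" + metrics.getClusterId());  // the UUID the servers log
          System.out.println("master=" + metrics.getMasterName());
          System.out.println("liveServers=" + metrics.getLiveServerMetrics().keySet());
        }
      }
    }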
2023-05-27 22:56:59,411 INFO [RS:0;jenkins-hbase4:42231] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,44839,1685228219122 with isa=jenkins-hbase4.apache.org/172.31.14.131:42231, startcode=1685228219175 2023-05-27 22:56:59,411 DEBUG [RS:0;jenkins-hbase4:42231] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-27 22:56:59,414 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60225, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-05-27 22:56:59,415 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44839] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,42231,1685228219175 2023-05-27 22:56:59,416 DEBUG [RS:0;jenkins-hbase4:42231] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd 2023-05-27 22:56:59,416 DEBUG [RS:0;jenkins-hbase4:42231] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44813 2023-05-27 22:56:59,416 DEBUG [RS:0;jenkins-hbase4:42231] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-27 22:56:59,418 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 22:56:59,418 DEBUG [RS:0;jenkins-hbase4:42231] zookeeper.ZKUtil(162): regionserver:42231-0x1006edc6de00001, quorum=127.0.0.1:53199, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42231,1685228219175 2023-05-27 22:56:59,418 WARN [RS:0;jenkins-hbase4:42231] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
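Note: throughout these records servers are identified by the "hostname,port,startcode" triplet (for example jenkins-hbase4.apache.org,42231,1685228219175 in the registration above). A tiny sketch of how that form is handled programmatically, assuming the public ServerName helper:

    import org.apache.hadoop.hbase.ServerName;

    public class ServerNameSketch {
      public static void main(String[] args) {
        // the same "host,port,startcode" format used in the rs znode and log messages
        ServerName rs = ServerName.valueOf("jenkins-hbase4.apache.org,42231,1685228219175");
        System.out.println(rs.getHostname());   // jenkins-hbase4.apache.org
        System.out.println(rs.getPort());       // 42231
        System.out.println(rs.getStartcode());  // 1685228219175, the server start timestamp
      }
    }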
2023-05-27 22:56:59,418 INFO [RS:0;jenkins-hbase4:42231] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 22:56:59,418 DEBUG [RS:0;jenkins-hbase4:42231] regionserver.HRegionServer(1946): logDir=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,42231,1685228219175 2023-05-27 22:56:59,419 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,42231,1685228219175] 2023-05-27 22:56:59,422 DEBUG [RS:0;jenkins-hbase4:42231] zookeeper.ZKUtil(162): regionserver:42231-0x1006edc6de00001, quorum=127.0.0.1:53199, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42231,1685228219175 2023-05-27 22:56:59,423 DEBUG [RS:0;jenkins-hbase4:42231] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-27 22:56:59,423 INFO [RS:0;jenkins-hbase4:42231] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-27 22:56:59,425 INFO [RS:0;jenkins-hbase4:42231] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-27 22:56:59,425 INFO [RS:0;jenkins-hbase4:42231] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-27 22:56:59,425 INFO [RS:0;jenkins-hbase4:42231] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-27 22:56:59,426 INFO [RS:0;jenkins-hbase4:42231] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-27 22:56:59,428 INFO [RS:0;jenkins-hbase4:42231] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
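The WALProvider instantiated here is FSHLogProvider, and the MemStoreFlusher line resolves the global memstore limit to 782.4 M with a 743.3 M low-water mark (about 95% of the limit, which matches the default lower-limit fraction). A hedged sketch of the configuration keys that drive these values, assuming the usual key names (the class name is hypothetical):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalAndMemstoreConfSketch {
  public static Configuration build() {
    Configuration conf = HBaseConfiguration.create();
    // "filesystem" is the provider id behind FSHLogProvider, the WAL provider named in the log
    conf.set("hbase.wal.provider", "filesystem");
    // Global memstore limit and its low-water mark, expressed as fractions of heap;
    // the log prints the resolved absolute sizes (782.4 M / 743.3 M here).
    conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
    conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);
    return conf;
  }
}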
2023-05-27 22:56:59,428 DEBUG [RS:0;jenkins-hbase4:42231] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:56:59,428 DEBUG [RS:0;jenkins-hbase4:42231] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:56:59,428 DEBUG [RS:0;jenkins-hbase4:42231] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:56:59,428 DEBUG [RS:0;jenkins-hbase4:42231] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:56:59,428 DEBUG [RS:0;jenkins-hbase4:42231] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:56:59,428 DEBUG [RS:0;jenkins-hbase4:42231] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-27 22:56:59,429 DEBUG [RS:0;jenkins-hbase4:42231] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:56:59,429 DEBUG [RS:0;jenkins-hbase4:42231] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:56:59,429 DEBUG [RS:0;jenkins-hbase4:42231] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:56:59,429 DEBUG [RS:0;jenkins-hbase4:42231] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:56:59,429 INFO [RS:0;jenkins-hbase4:42231] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-27 22:56:59,430 INFO [RS:0;jenkins-hbase4:42231] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-27 22:56:59,430 INFO [RS:0;jenkins-hbase4:42231] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-27 22:56:59,442 INFO [RS:0;jenkins-hbase4:42231] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-27 22:56:59,442 INFO [RS:0;jenkins-hbase4:42231] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42231,1685228219175-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-27 22:56:59,459 INFO [RS:0;jenkins-hbase4:42231] regionserver.Replication(203): jenkins-hbase4.apache.org,42231,1685228219175 started 2023-05-27 22:56:59,459 INFO [RS:0;jenkins-hbase4:42231] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,42231,1685228219175, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:42231, sessionid=0x1006edc6de00001 2023-05-27 22:56:59,459 DEBUG [RS:0;jenkins-hbase4:42231] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-27 22:56:59,459 DEBUG [RS:0;jenkins-hbase4:42231] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,42231,1685228219175 2023-05-27 22:56:59,459 DEBUG [RS:0;jenkins-hbase4:42231] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42231,1685228219175' 2023-05-27 22:56:59,459 DEBUG [RS:0;jenkins-hbase4:42231] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 22:56:59,460 DEBUG [RS:0;jenkins-hbase4:42231] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 22:56:59,460 DEBUG [RS:0;jenkins-hbase4:42231] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-27 22:56:59,460 DEBUG [RS:0;jenkins-hbase4:42231] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-27 22:56:59,460 DEBUG [RS:0;jenkins-hbase4:42231] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,42231,1685228219175 2023-05-27 22:56:59,460 DEBUG [RS:0;jenkins-hbase4:42231] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42231,1685228219175' 2023-05-27 22:56:59,460 DEBUG [RS:0;jenkins-hbase4:42231] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-27 22:56:59,461 DEBUG [RS:0;jenkins-hbase4:42231] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-27 22:56:59,461 DEBUG [RS:0;jenkins-hbase4:42231] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-27 22:56:59,461 INFO [RS:0;jenkins-hbase4:42231] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-27 22:56:59,461 INFO [RS:0;jenkins-hbase4:42231] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
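With the flush-table-proc and online-snapshot members registered under /hbase/flush-table-proc and /hbase/online-snapshot, client-driven flushes and snapshots can be coordinated through ZooKeeper. A minimal sketch, assuming a reachable cluster and default connection settings (class name hypothetical), of the client call that exercises the flush procedure path this regionserver just joined:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // An Admin#flush request is what ultimately reaches the flush-table-proc member started above
      admin.flush(TableName.valueOf("hbase:meta"));
    }
  }
}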
2023-05-27 22:56:59,535 DEBUG [jenkins-hbase4:44839] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-27 22:56:59,536 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42231,1685228219175, state=OPENING 2023-05-27 22:56:59,537 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-27 22:56:59,540 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:56:59,540 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42231,1685228219175}] 2023-05-27 22:56:59,540 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-27 22:56:59,564 INFO [RS:0;jenkins-hbase4:42231] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42231%2C1685228219175, suffix=, logDir=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,42231,1685228219175, archiveDir=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/oldWALs, maxLogs=32 2023-05-27 22:56:59,575 INFO [RS:0;jenkins-hbase4:42231] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,42231,1685228219175/jenkins-hbase4.apache.org%2C42231%2C1685228219175.1685228219565 2023-05-27 22:56:59,575 DEBUG [RS:0;jenkins-hbase4:42231] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34645,DS-6a8213fd-cd62-4a90-81df-bf520a89a643,DISK], DatanodeInfoWithStorage[127.0.0.1:38065,DS-cd4dd9f2-9785-449d-b4b7-fd846ca22237,DISK]] 2023-05-27 22:56:59,695 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,42231,1685228219175 2023-05-27 22:56:59,695 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-27 22:56:59,697 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39510, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-27 22:56:59,702 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-27 22:56:59,702 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 22:56:59,704 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42231%2C1685228219175.meta, suffix=.meta, logDir=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,42231,1685228219175, archiveDir=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/oldWALs, maxLogs=32 2023-05-27 22:56:59,715 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,42231,1685228219175/jenkins-hbase4.apache.org%2C42231%2C1685228219175.meta.1685228219705.meta 2023-05-27 22:56:59,715 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34645,DS-6a8213fd-cd62-4a90-81df-bf520a89a643,DISK], DatanodeInfoWithStorage[127.0.0.1:38065,DS-cd4dd9f2-9785-449d-b4b7-fd846ca22237,DISK]] 2023-05-27 22:56:59,715 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-27 22:56:59,715 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-27 22:56:59,715 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-27 22:56:59,716 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-27 22:56:59,716 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-27 22:56:59,716 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:56:59,716 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-27 22:56:59,716 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-27 22:56:59,718 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-27 22:56:59,719 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/hbase/meta/1588230740/info 2023-05-27 22:56:59,719 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/hbase/meta/1588230740/info 2023-05-27 22:56:59,719 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-27 22:56:59,720 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:56:59,720 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-27 22:56:59,721 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/hbase/meta/1588230740/rep_barrier 2023-05-27 22:56:59,721 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/hbase/meta/1588230740/rep_barrier 2023-05-27 22:56:59,721 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-27 22:56:59,722 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:56:59,722 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-27 22:56:59,723 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/hbase/meta/1588230740/table 2023-05-27 22:56:59,723 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/hbase/meta/1588230740/table 2023-05-27 22:56:59,724 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-27 22:56:59,725 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:56:59,726 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/hbase/meta/1588230740 2023-05-27 22:56:59,727 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/hbase/meta/1588230740 2023-05-27 22:56:59,730 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-27 22:56:59,732 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-27 22:56:59,733 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=842933, jitterRate=0.0718451589345932}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-27 22:56:59,733 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-27 22:56:59,734 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685228219695 2023-05-27 22:56:59,738 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-27 22:56:59,738 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-27 22:56:59,739 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42231,1685228219175, state=OPEN 2023-05-27 22:56:59,741 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-27 22:56:59,741 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-27 22:56:59,744 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-27 22:56:59,744 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42231,1685228219175 in 201 msec 2023-05-27 22:56:59,747 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-27 22:56:59,747 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 362 msec 2023-05-27 22:56:59,749 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 424 msec 2023-05-27 22:56:59,749 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685228219749, completionTime=-1 2023-05-27 22:56:59,749 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-27 22:56:59,749 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-27 22:56:59,752 DEBUG [hconnection-0xd1d2bfe-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-27 22:56:59,754 INFO [RS-EventLoopGroup-6-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39514, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-27 22:56:59,755 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-27 22:56:59,755 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685228279755 2023-05-27 22:56:59,756 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685228339756 2023-05-27 22:56:59,756 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-05-27 22:56:59,761 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44839,1685228219122-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-27 22:56:59,761 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44839,1685228219122-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 22:56:59,761 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44839,1685228219122-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 22:56:59,762 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:44839, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 22:56:59,762 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-27 22:56:59,762 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
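At this point meta is assigned, the master has joined the cluster, and the housekeeping chores (ClusterStatusChore, BalancerChore, CatalogJanitor, HbckChore) are scheduled; the "Master has completed initialization" line follows a little further down. In a test, that milestone can be checked directly on the mini cluster; a short sketch, assuming an HBaseTestingUtility instance is in scope (names here are illustrative):

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.master.HMaster;

public class MasterInitializedSketch {
  public static boolean masterReady(HBaseTestingUtility util) {
    // True once the active master has reached the "completed initialization" state logged below
    HMaster master = util.getMiniHBaseCluster().getMaster();
    return master.isInitialized();
  }
}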
2023-05-27 22:56:59,762 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-27 22:56:59,763 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-27 22:56:59,763 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-27 22:56:59,764 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-27 22:56:59,765 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-27 22:56:59,767 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/.tmp/data/hbase/namespace/3c1df9bdde90309b097a8fb8043a5f38 2023-05-27 22:56:59,767 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/.tmp/data/hbase/namespace/3c1df9bdde90309b097a8fb8043a5f38 empty. 2023-05-27 22:56:59,768 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/.tmp/data/hbase/namespace/3c1df9bdde90309b097a8fb8043a5f38 2023-05-27 22:56:59,768 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-27 22:56:59,780 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-27 22:56:59,781 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 3c1df9bdde90309b097a8fb8043a5f38, NAME => 'hbase:namespace,,1685228219762.3c1df9bdde90309b097a8fb8043a5f38.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/.tmp 2023-05-27 22:56:59,790 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685228219762.3c1df9bdde90309b097a8fb8043a5f38.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:56:59,790 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 3c1df9bdde90309b097a8fb8043a5f38, disabling compactions & flushes 2023-05-27 22:56:59,790 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685228219762.3c1df9bdde90309b097a8fb8043a5f38. 
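The CreateTableProcedure above carries the full hbase:namespace descriptor: a single 'info' family with a ROW bloom filter, in-memory blocks, 10 versions and an 8 KB block size. A hedged equivalent of that descriptor built with the 2.x builder API (class name hypothetical, attributes taken from the log line):

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class NamespaceTableDescriptorSketch {
  public static TableDescriptor build() {
    // Mirrors the attributes printed for the 'info' family of hbase:namespace above
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("hbase:namespace"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
            .setBloomFilterType(BloomType.ROW)
            .setInMemory(true)
            .setMaxVersions(10)
            .setBlocksize(8192)
            .build())
        .build();
  }
}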
2023-05-27 22:56:59,790 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685228219762.3c1df9bdde90309b097a8fb8043a5f38. 2023-05-27 22:56:59,790 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685228219762.3c1df9bdde90309b097a8fb8043a5f38. after waiting 0 ms 2023-05-27 22:56:59,790 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685228219762.3c1df9bdde90309b097a8fb8043a5f38. 2023-05-27 22:56:59,790 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685228219762.3c1df9bdde90309b097a8fb8043a5f38. 2023-05-27 22:56:59,790 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 3c1df9bdde90309b097a8fb8043a5f38: 2023-05-27 22:56:59,793 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-27 22:56:59,794 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685228219762.3c1df9bdde90309b097a8fb8043a5f38.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685228219794"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685228219794"}]},"ts":"1685228219794"} 2023-05-27 22:56:59,797 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-27 22:56:59,798 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-27 22:56:59,798 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685228219798"}]},"ts":"1685228219798"} 2023-05-27 22:56:59,800 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-27 22:56:59,806 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=3c1df9bdde90309b097a8fb8043a5f38, ASSIGN}] 2023-05-27 22:56:59,808 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=3c1df9bdde90309b097a8fb8043a5f38, ASSIGN 2023-05-27 22:56:59,809 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=3c1df9bdde90309b097a8fb8043a5f38, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42231,1685228219175; forceNewPlan=false, retain=false 2023-05-27 22:56:59,960 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=3c1df9bdde90309b097a8fb8043a5f38, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42231,1685228219175 2023-05-27 22:56:59,960 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685228219762.3c1df9bdde90309b097a8fb8043a5f38.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685228219960"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685228219960"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685228219960"}]},"ts":"1685228219960"} 2023-05-27 22:56:59,962 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 3c1df9bdde90309b097a8fb8043a5f38, server=jenkins-hbase4.apache.org,42231,1685228219175}] 2023-05-27 22:57:00,120 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685228219762.3c1df9bdde90309b097a8fb8043a5f38. 2023-05-27 22:57:00,120 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3c1df9bdde90309b097a8fb8043a5f38, NAME => 'hbase:namespace,,1685228219762.3c1df9bdde90309b097a8fb8043a5f38.', STARTKEY => '', ENDKEY => ''} 2023-05-27 22:57:00,120 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 3c1df9bdde90309b097a8fb8043a5f38 2023-05-27 22:57:00,120 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685228219762.3c1df9bdde90309b097a8fb8043a5f38.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:57:00,120 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3c1df9bdde90309b097a8fb8043a5f38 2023-05-27 22:57:00,120 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3c1df9bdde90309b097a8fb8043a5f38 2023-05-27 22:57:00,122 INFO [StoreOpener-3c1df9bdde90309b097a8fb8043a5f38-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 3c1df9bdde90309b097a8fb8043a5f38 2023-05-27 22:57:00,123 DEBUG [StoreOpener-3c1df9bdde90309b097a8fb8043a5f38-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/hbase/namespace/3c1df9bdde90309b097a8fb8043a5f38/info 2023-05-27 22:57:00,124 DEBUG [StoreOpener-3c1df9bdde90309b097a8fb8043a5f38-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/hbase/namespace/3c1df9bdde90309b097a8fb8043a5f38/info 2023-05-27 22:57:00,124 INFO [StoreOpener-3c1df9bdde90309b097a8fb8043a5f38-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3c1df9bdde90309b097a8fb8043a5f38 columnFamilyName info 2023-05-27 22:57:00,125 INFO [StoreOpener-3c1df9bdde90309b097a8fb8043a5f38-1] regionserver.HStore(310): Store=3c1df9bdde90309b097a8fb8043a5f38/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:57:00,126 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/hbase/namespace/3c1df9bdde90309b097a8fb8043a5f38 2023-05-27 22:57:00,127 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/hbase/namespace/3c1df9bdde90309b097a8fb8043a5f38 2023-05-27 22:57:00,130 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3c1df9bdde90309b097a8fb8043a5f38 2023-05-27 22:57:00,133 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/hbase/namespace/3c1df9bdde90309b097a8fb8043a5f38/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 22:57:00,133 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3c1df9bdde90309b097a8fb8043a5f38; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=832534, jitterRate=0.05862298607826233}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 22:57:00,133 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3c1df9bdde90309b097a8fb8043a5f38: 2023-05-27 22:57:00,135 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685228219762.3c1df9bdde90309b097a8fb8043a5f38., pid=6, masterSystemTime=1685228220115 2023-05-27 22:57:00,137 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685228219762.3c1df9bdde90309b097a8fb8043a5f38. 2023-05-27 22:57:00,137 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685228219762.3c1df9bdde90309b097a8fb8043a5f38. 
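Once AssignRegionHandler reports the region opened and the 1.seqid marker is written under recovered.edits, the namespace region is locatable from the client side. A small sketch, assuming an already-open Connection (class name hypothetical), of resolving which regionserver hbase:namespace landed on:

import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public class NamespaceRegionLocationSketch {
  public static String whereIsNamespace(Connection conn) throws Exception {
    try (RegionLocator locator = conn.getRegionLocator(TableName.valueOf("hbase:namespace"))) {
      // Should resolve to the regionserver the AssignRegionHandler just opened the region on
      HRegionLocation loc = locator.getRegionLocation(Bytes.toBytes(""), true);
      return loc.getServerName().toString();
    }
  }
}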
2023-05-27 22:57:00,138 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=3c1df9bdde90309b097a8fb8043a5f38, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42231,1685228219175 2023-05-27 22:57:00,139 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685228219762.3c1df9bdde90309b097a8fb8043a5f38.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685228220138"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685228220138"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685228220138"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685228220138"}]},"ts":"1685228220138"} 2023-05-27 22:57:00,144 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-27 22:57:00,144 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 3c1df9bdde90309b097a8fb8043a5f38, server=jenkins-hbase4.apache.org,42231,1685228219175 in 179 msec 2023-05-27 22:57:00,147 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-27 22:57:00,147 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=3c1df9bdde90309b097a8fb8043a5f38, ASSIGN in 338 msec 2023-05-27 22:57:00,148 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-27 22:57:00,148 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685228220148"}]},"ts":"1685228220148"} 2023-05-27 22:57:00,150 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-27 22:57:00,153 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-27 22:57:00,154 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 391 msec 2023-05-27 22:57:00,164 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-27 22:57:00,166 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-27 22:57:00,166 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:57:00,171 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-27 22:57:00,179 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): 
master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-27 22:57:00,182 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 11 msec 2023-05-27 22:57:00,192 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-27 22:57:00,200 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-27 22:57:00,205 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 11 msec 2023-05-27 22:57:00,216 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-27 22:57:00,219 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-27 22:57:00,219 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.022sec 2023-05-27 22:57:00,220 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-27 22:57:00,220 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-27 22:57:00,220 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-27 22:57:00,220 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44839,1685228219122-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-27 22:57:00,220 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44839,1685228219122-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
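The two CreateNamespaceProcedure runs above create the built-in 'default' and 'hbase' namespaces; user namespaces go through the same procedure path. A brief sketch, assuming an Admin handle is available (class name hypothetical), of listing them and, commented out, creating one:

import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;

public class NamespaceListingSketch {
  public static void show(Admin admin) throws Exception {
    // The two CreateNamespaceProcedure runs above correspond to these built-in namespaces
    for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
      System.out.println(ns.getName());   // expect "default" and "hbase"
    }
    // A user namespace would go through the same procedure:
    // admin.createNamespace(NamespaceDescriptor.create("test_ns").build());
  }
}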
2023-05-27 22:57:00,222 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-27 22:57:00,290 DEBUG [Listener at localhost/38643] zookeeper.ReadOnlyZKClient(139): Connect 0x632e67ea to 127.0.0.1:53199 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 22:57:00,294 DEBUG [Listener at localhost/38643] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@376e5b58, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 22:57:00,296 DEBUG [hconnection-0xa471768-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-27 22:57:00,298 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39528, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-27 22:57:00,300 INFO [Listener at localhost/38643] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,44839,1685228219122 2023-05-27 22:57:00,301 INFO [Listener at localhost/38643] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:57:00,304 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-27 22:57:00,304 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:57:00,305 INFO [Listener at localhost/38643] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-27 22:57:00,317 INFO [Listener at localhost/38643] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-27 22:57:00,317 INFO [Listener at localhost/38643] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 22:57:00,317 INFO [Listener at localhost/38643] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-27 22:57:00,317 INFO [Listener at localhost/38643] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-27 22:57:00,317 INFO [Listener at localhost/38643] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 22:57:00,317 INFO [Listener at localhost/38643] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-27 22:57:00,317 INFO [Listener at localhost/38643] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, 
hbase.pb.AdminService 2023-05-27 22:57:00,319 INFO [Listener at localhost/38643] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36195 2023-05-27 22:57:00,319 INFO [Listener at localhost/38643] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-27 22:57:00,320 DEBUG [Listener at localhost/38643] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-27 22:57:00,320 INFO [Listener at localhost/38643] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:57:00,321 INFO [Listener at localhost/38643] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:57:00,322 INFO [Listener at localhost/38643] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36195 connecting to ZooKeeper ensemble=127.0.0.1:53199 2023-05-27 22:57:00,326 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): regionserver:361950x0, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-27 22:57:00,327 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36195-0x1006edc6de00005 connected 2023-05-27 22:57:00,327 DEBUG [Listener at localhost/38643] zookeeper.ZKUtil(162): regionserver:36195-0x1006edc6de00005, quorum=127.0.0.1:53199, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-27 22:57:00,328 DEBUG [Listener at localhost/38643] zookeeper.ZKUtil(162): regionserver:36195-0x1006edc6de00005, quorum=127.0.0.1:53199, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-05-27 22:57:00,329 DEBUG [Listener at localhost/38643] zookeeper.ZKUtil(164): regionserver:36195-0x1006edc6de00005, quorum=127.0.0.1:53199, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-27 22:57:00,332 DEBUG [Listener at localhost/38643] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36195 2023-05-27 22:57:00,334 DEBUG [Listener at localhost/38643] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36195 2023-05-27 22:57:00,338 DEBUG [Listener at localhost/38643] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36195 2023-05-27 22:57:00,338 DEBUG [Listener at localhost/38643] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36195 2023-05-27 22:57:00,340 DEBUG [Listener at localhost/38643] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36195 2023-05-27 22:57:00,344 INFO [RS:1;jenkins-hbase4:36195] regionserver.HRegionServer(951): ClusterId : 34945865-3213-473b-a000-bdce83b17d14 2023-05-27 22:57:00,345 DEBUG [RS:1;jenkins-hbase4:36195] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-27 22:57:00,348 DEBUG [RS:1;jenkins-hbase4:36195] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-27 22:57:00,348 DEBUG [RS:1;jenkins-hbase4:36195] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-27 22:57:00,351 DEBUG 
[RS:1;jenkins-hbase4:36195] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-27 22:57:00,352 DEBUG [RS:1;jenkins-hbase4:36195] zookeeper.ReadOnlyZKClient(139): Connect 0x70967ca0 to 127.0.0.1:53199 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 22:57:00,359 DEBUG [RS:1;jenkins-hbase4:36195] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7e518cc2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 22:57:00,360 DEBUG [RS:1;jenkins-hbase4:36195] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7a6e17d0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-27 22:57:00,368 DEBUG [RS:1;jenkins-hbase4:36195] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:36195 2023-05-27 22:57:00,369 INFO [RS:1;jenkins-hbase4:36195] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-27 22:57:00,369 INFO [RS:1;jenkins-hbase4:36195] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-27 22:57:00,369 DEBUG [RS:1;jenkins-hbase4:36195] regionserver.HRegionServer(1022): About to register with Master. 2023-05-27 22:57:00,370 INFO [RS:1;jenkins-hbase4:36195] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,44839,1685228219122 with isa=jenkins-hbase4.apache.org/172.31.14.131:36195, startcode=1685228220316 2023-05-27 22:57:00,370 DEBUG [RS:1;jenkins-hbase4:36195] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-27 22:57:00,372 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59801, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-05-27 22:57:00,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44839] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,36195,1685228220316 2023-05-27 22:57:00,373 DEBUG [RS:1;jenkins-hbase4:36195] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd 2023-05-27 22:57:00,373 DEBUG [RS:1;jenkins-hbase4:36195] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44813 2023-05-27 22:57:00,373 DEBUG [RS:1;jenkins-hbase4:36195] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-27 22:57:00,381 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): regionserver:42231-0x1006edc6de00001, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 22:57:00,381 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 22:57:00,381 DEBUG [RS:1;jenkins-hbase4:36195] zookeeper.ZKUtil(162): regionserver:36195-0x1006edc6de00005, 
quorum=127.0.0.1:53199, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36195,1685228220316 2023-05-27 22:57:00,381 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,36195,1685228220316] 2023-05-27 22:57:00,381 WARN [RS:1;jenkins-hbase4:36195] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-27 22:57:00,382 INFO [RS:1;jenkins-hbase4:36195] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 22:57:00,382 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42231-0x1006edc6de00001, quorum=127.0.0.1:53199, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42231,1685228219175 2023-05-27 22:57:00,382 DEBUG [RS:1;jenkins-hbase4:36195] regionserver.HRegionServer(1946): logDir=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,36195,1685228220316 2023-05-27 22:57:00,382 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42231-0x1006edc6de00001, quorum=127.0.0.1:53199, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36195,1685228220316 2023-05-27 22:57:00,386 DEBUG [RS:1;jenkins-hbase4:36195] zookeeper.ZKUtil(162): regionserver:36195-0x1006edc6de00005, quorum=127.0.0.1:53199, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42231,1685228219175 2023-05-27 22:57:00,387 DEBUG [RS:1;jenkins-hbase4:36195] zookeeper.ZKUtil(162): regionserver:36195-0x1006edc6de00005, quorum=127.0.0.1:53199, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36195,1685228220316 2023-05-27 22:57:00,388 DEBUG [RS:1;jenkins-hbase4:36195] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-27 22:57:00,388 INFO [RS:1;jenkins-hbase4:36195] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-27 22:57:00,390 INFO [RS:1;jenkins-hbase4:36195] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-27 22:57:00,395 INFO [RS:1;jenkins-hbase4:36195] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-27 22:57:00,396 INFO [RS:1;jenkins-hbase4:36195] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-27 22:57:00,396 INFO [RS:1;jenkins-hbase4:36195] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-27 22:57:00,397 INFO [RS:1;jenkins-hbase4:36195] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-27 22:57:00,397 DEBUG [RS:1;jenkins-hbase4:36195] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:57:00,397 DEBUG [RS:1;jenkins-hbase4:36195] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:57:00,397 DEBUG [RS:1;jenkins-hbase4:36195] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:57:00,397 DEBUG [RS:1;jenkins-hbase4:36195] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:57:00,398 DEBUG [RS:1;jenkins-hbase4:36195] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:57:00,398 DEBUG [RS:1;jenkins-hbase4:36195] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-27 22:57:00,398 DEBUG [RS:1;jenkins-hbase4:36195] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:57:00,398 DEBUG [RS:1;jenkins-hbase4:36195] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:57:00,398 DEBUG [RS:1;jenkins-hbase4:36195] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:57:00,398 DEBUG [RS:1;jenkins-hbase4:36195] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:57:00,399 INFO [RS:1;jenkins-hbase4:36195] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-27 22:57:00,399 INFO [RS:1;jenkins-hbase4:36195] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-27 22:57:00,399 INFO [RS:1;jenkins-hbase4:36195] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-27 22:57:00,410 INFO [RS:1;jenkins-hbase4:36195] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-27 22:57:00,410 INFO [RS:1;jenkins-hbase4:36195] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36195,1685228220316-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
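
The run of "Chore ScheduledChore name=..., period=..., unit=MILLISECONDS is enabled." entries above comes from the region server scheduling its periodic background tasks on a ChoreService. The following is only a minimal sketch of that pattern, assuming the public ChoreService/ScheduledChore/Stoppable classes; the chore name and period are illustrative, not values from this log.

import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

// Sketch of the ChoreService/ScheduledChore pattern behind the
// "Chore ScheduledChore name=... is enabled" lines. Name and period
// below are illustrative only.
public class ChoreSketch {
  public static void main(String[] args) {
    Stoppable stopper = new Stoppable() {
      private volatile boolean stopped;
      @Override public void stop(String why) { stopped = true; }
      @Override public boolean isStopped() { return stopped; }
    };
    ChoreService choreService = new ChoreService("example-chore-service");
    ScheduledChore chore = new ScheduledChore("ExampleChecker", stopper, 1000) {
      @Override protected void chore() {
        // Periodic work goes here, e.g. a compaction or flush check.
      }
    };
    // ChoreService logs the "... is enabled." line when a chore is scheduled.
    choreService.scheduleChore(chore);
    // ... server or test runs ...
    choreService.shutdown();
  }
}
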
2023-05-27 22:57:00,421 INFO [RS:1;jenkins-hbase4:36195] regionserver.Replication(203): jenkins-hbase4.apache.org,36195,1685228220316 started 2023-05-27 22:57:00,421 INFO [RS:1;jenkins-hbase4:36195] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,36195,1685228220316, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:36195, sessionid=0x1006edc6de00005 2023-05-27 22:57:00,421 INFO [Listener at localhost/38643] hbase.HBaseTestingUtility(3254): Started new server=Thread[RS:1;jenkins-hbase4:36195,5,FailOnTimeoutGroup] 2023-05-27 22:57:00,421 DEBUG [RS:1;jenkins-hbase4:36195] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-27 22:57:00,421 INFO [Listener at localhost/38643] wal.TestLogRolling(323): Replication=2 2023-05-27 22:57:00,421 DEBUG [RS:1;jenkins-hbase4:36195] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,36195,1685228220316 2023-05-27 22:57:00,422 DEBUG [RS:1;jenkins-hbase4:36195] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36195,1685228220316' 2023-05-27 22:57:00,422 DEBUG [RS:1;jenkins-hbase4:36195] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 22:57:00,422 DEBUG [RS:1;jenkins-hbase4:36195] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 22:57:00,423 DEBUG [Listener at localhost/38643] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-27 22:57:00,424 DEBUG [RS:1;jenkins-hbase4:36195] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-27 22:57:00,424 DEBUG [RS:1;jenkins-hbase4:36195] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-27 22:57:00,424 DEBUG [RS:1;jenkins-hbase4:36195] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,36195,1685228220316 2023-05-27 22:57:00,424 DEBUG [RS:1;jenkins-hbase4:36195] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36195,1685228220316' 2023-05-27 22:57:00,424 DEBUG [RS:1;jenkins-hbase4:36195] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-27 22:57:00,425 DEBUG [RS:1;jenkins-hbase4:36195] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-27 22:57:00,425 DEBUG [RS:1;jenkins-hbase4:36195] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-27 22:57:00,425 INFO [RS:1;jenkins-hbase4:36195] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-27 22:57:00,426 INFO [RS:1;jenkins-hbase4:36195] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-05-27 22:57:00,427 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60652, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-27 22:57:00,429 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44839] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 
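
Just above, the test asks the mini cluster for a second region server ("Started new server=Thread[RS:1;jenkins-hbase4:36195,...]") before creating its table. A hedged sketch of that step, assuming the standard HBaseTestingUtility and MiniHBaseCluster test APIs rather than the exact code of TestLogRolling:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

// Sketch (assumptions, not code from this test) of starting a mini cluster
// and then adding a second region server, which is roughly what produces the
// "Started new server=Thread[RS:1;...]" line above.
public class MiniClusterSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    util.startMiniCluster(StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(1)
        .numDataNodes(2)
        .build());
    // Add RS:1 on top of the RS:0 started with the cluster.
    util.getMiniHBaseCluster().startRegionServer();
    // ... run the test ...
    util.shutdownMiniCluster();
  }
}
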
2023-05-27 22:57:00,429 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44839] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 2023-05-27 22:57:00,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44839] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-27 22:57:00,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44839] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath 2023-05-27 22:57:00,432 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_PRE_OPERATION 2023-05-27 22:57:00,432 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44839] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnDatanodeDeath" procId is: 9 2023-05-27 22:57:00,433 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-27 22:57:00,433 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44839] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-27 22:57:00,435 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/cc70acfd85b964308e922cfb097a3b0c 2023-05-27 22:57:00,436 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/cc70acfd85b964308e922cfb097a3b0c empty. 
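
At this point the master is servicing a client create-table request for 'TestLogRolling-testLogRollOnDatanodeDeath' with a single 'info' family, and the two TableDescriptorChecker warnings show that very small limits (786432 for max file size, 8192 for memstore flush size) are in effect, either on the table descriptor or via "hbase.hregion.max.filesize" / "hbase.hregion.memstore.flush.size" in the test configuration. A hedged client-side sketch of such a request with the HBase 2.x Admin API; the plain connection setup is an assumption, since the test uses its mini-cluster configuration instead:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch of the client call behind "Client=jenkins//... create
// 'TestLogRolling-testLogRollOnDatanodeDeath'". Setting the tiny limits on
// the descriptor is one option; the test may set the equivalent
// configuration keys instead (the warning text covers both).
public class CreateTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      admin.createTable(
          TableDescriptorBuilder.newBuilder(TableName.valueOf("TestLogRolling-testLogRollOnDatanodeDeath"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                  .setMaxVersions(1)
                  .build())
              // Tiny limits matching the values the master warns about above;
              // they force frequent flushes and splits for the test.
              .setMaxFileSize(786432L)
              .setMemStoreFlushSize(8192L)
              .build());
    }
  }
}
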
2023-05-27 22:57:00,436 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/cc70acfd85b964308e922cfb097a3b0c 2023-05-27 22:57:00,436 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnDatanodeDeath regions 2023-05-27 22:57:00,452 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/.tabledesc/.tableinfo.0000000001 2023-05-27 22:57:00,455 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(7675): creating {ENCODED => cc70acfd85b964308e922cfb097a3b0c, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1685228220428.cc70acfd85b964308e922cfb097a3b0c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/.tmp 2023-05-27 22:57:00,469 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1685228220428.cc70acfd85b964308e922cfb097a3b0c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:57:00,469 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1604): Closing cc70acfd85b964308e922cfb097a3b0c, disabling compactions & flushes 2023-05-27 22:57:00,469 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1685228220428.cc70acfd85b964308e922cfb097a3b0c. 2023-05-27 22:57:00,469 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685228220428.cc70acfd85b964308e922cfb097a3b0c. 2023-05-27 22:57:00,469 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685228220428.cc70acfd85b964308e922cfb097a3b0c. after waiting 0 ms 2023-05-27 22:57:00,470 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1685228220428.cc70acfd85b964308e922cfb097a3b0c. 2023-05-27 22:57:00,470 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1685228220428.cc70acfd85b964308e922cfb097a3b0c. 
2023-05-27 22:57:00,470 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1558): Region close journal for cc70acfd85b964308e922cfb097a3b0c: 2023-05-27 22:57:00,473 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ADD_TO_META 2023-05-27 22:57:00,474 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1685228220428.cc70acfd85b964308e922cfb097a3b0c.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1685228220474"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685228220474"}]},"ts":"1685228220474"} 2023-05-27 22:57:00,476 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-27 22:57:00,478 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-27 22:57:00,478 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685228220478"}]},"ts":"1685228220478"} 2023-05-27 22:57:00,479 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLING in hbase:meta 2023-05-27 22:57:00,486 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-05-27 22:57:00,488 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-05-27 22:57:00,488 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-05-27 22:57:00,488 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-05-27 22:57:00,488 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=cc70acfd85b964308e922cfb097a3b0c, ASSIGN}] 2023-05-27 22:57:00,490 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=cc70acfd85b964308e922cfb097a3b0c, ASSIGN 2023-05-27 22:57:00,491 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=cc70acfd85b964308e922cfb097a3b0c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36195,1685228220316; forceNewPlan=false, retain=false 2023-05-27 22:57:00,528 INFO [RS:1;jenkins-hbase4:36195] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36195%2C1685228220316, suffix=, logDir=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,36195,1685228220316, 
archiveDir=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/oldWALs, maxLogs=32 2023-05-27 22:57:00,538 INFO [RS:1;jenkins-hbase4:36195] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,36195,1685228220316/jenkins-hbase4.apache.org%2C36195%2C1685228220316.1685228220529 2023-05-27 22:57:00,538 DEBUG [RS:1;jenkins-hbase4:36195] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34645,DS-6a8213fd-cd62-4a90-81df-bf520a89a643,DISK], DatanodeInfoWithStorage[127.0.0.1:38065,DS-cd4dd9f2-9785-449d-b4b7-fd846ca22237,DISK]] 2023-05-27 22:57:00,643 INFO [jenkins-hbase4:44839] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-05-27 22:57:00,644 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=cc70acfd85b964308e922cfb097a3b0c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36195,1685228220316 2023-05-27 22:57:00,645 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1685228220428.cc70acfd85b964308e922cfb097a3b0c.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1685228220644"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685228220644"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685228220644"}]},"ts":"1685228220644"} 2023-05-27 22:57:00,647 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure cc70acfd85b964308e922cfb097a3b0c, server=jenkins-hbase4.apache.org,36195,1685228220316}] 2023-05-27 22:57:00,801 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,36195,1685228220316 2023-05-27 22:57:00,801 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-27 22:57:00,803 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55854, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-27 22:57:00,808 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnDatanodeDeath,,1685228220428.cc70acfd85b964308e922cfb097a3b0c. 
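
The new region server has just created its first WAL under FSHLogProvider with "blocksize=256 MB, rollsize=128 MB, ..., maxLogs=32", written through a two-datanode pipeline. The sketch below lists the standard configuration keys that usually correspond to those numbers; whether this test sets them explicitly is an assumption, as the log only reports the resulting values:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Sketch of the WAL-sizing knobs behind the "blocksize=256 MB,
// rollsize=128 MB, ..., maxLogs=32" line above. The keys are standard
// HBase properties; the assignments here are illustrative.
public class WalConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // WAL block size on HDFS; rollsize = blocksize * logroll.multiplier.
    conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
    conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);
    // Maximum number of WAL files before rolls force flushes.
    conf.setInt("hbase.regionserver.maxlogs", 32);
    long rollsize = (long) (conf.getLong("hbase.regionserver.hlog.blocksize", 0)
        * conf.getFloat("hbase.regionserver.logroll.multiplier", 0.5f));
    System.out.println("rollsize = " + rollsize + " bytes");
  }
}
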
2023-05-27 22:57:00,808 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cc70acfd85b964308e922cfb097a3b0c, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1685228220428.cc70acfd85b964308e922cfb097a3b0c.', STARTKEY => '', ENDKEY => ''} 2023-05-27 22:57:00,809 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnDatanodeDeath cc70acfd85b964308e922cfb097a3b0c 2023-05-27 22:57:00,809 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1685228220428.cc70acfd85b964308e922cfb097a3b0c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:57:00,809 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cc70acfd85b964308e922cfb097a3b0c 2023-05-27 22:57:00,809 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cc70acfd85b964308e922cfb097a3b0c 2023-05-27 22:57:00,810 INFO [StoreOpener-cc70acfd85b964308e922cfb097a3b0c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region cc70acfd85b964308e922cfb097a3b0c 2023-05-27 22:57:00,812 DEBUG [StoreOpener-cc70acfd85b964308e922cfb097a3b0c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/default/TestLogRolling-testLogRollOnDatanodeDeath/cc70acfd85b964308e922cfb097a3b0c/info 2023-05-27 22:57:00,812 DEBUG [StoreOpener-cc70acfd85b964308e922cfb097a3b0c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/default/TestLogRolling-testLogRollOnDatanodeDeath/cc70acfd85b964308e922cfb097a3b0c/info 2023-05-27 22:57:00,812 INFO [StoreOpener-cc70acfd85b964308e922cfb097a3b0c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cc70acfd85b964308e922cfb097a3b0c columnFamilyName info 2023-05-27 22:57:00,813 INFO [StoreOpener-cc70acfd85b964308e922cfb097a3b0c-1] regionserver.HStore(310): Store=cc70acfd85b964308e922cfb097a3b0c/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:57:00,814 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/default/TestLogRolling-testLogRollOnDatanodeDeath/cc70acfd85b964308e922cfb097a3b0c 2023-05-27 22:57:00,815 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/default/TestLogRolling-testLogRollOnDatanodeDeath/cc70acfd85b964308e922cfb097a3b0c 2023-05-27 22:57:00,817 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cc70acfd85b964308e922cfb097a3b0c 2023-05-27 22:57:00,820 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/default/TestLogRolling-testLogRollOnDatanodeDeath/cc70acfd85b964308e922cfb097a3b0c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 22:57:00,821 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cc70acfd85b964308e922cfb097a3b0c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=759188, jitterRate=-0.03464265167713165}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 22:57:00,821 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cc70acfd85b964308e922cfb097a3b0c: 2023-05-27 22:57:00,822 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnDatanodeDeath,,1685228220428.cc70acfd85b964308e922cfb097a3b0c., pid=11, masterSystemTime=1685228220801 2023-05-27 22:57:00,825 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnDatanodeDeath,,1685228220428.cc70acfd85b964308e922cfb097a3b0c. 2023-05-27 22:57:00,825 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnDatanodeDeath,,1685228220428.cc70acfd85b964308e922cfb097a3b0c. 
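
Region cc70acfd85b964308e922cfb097a3b0c is now open on the new region server (next sequenceid=2) and the open is being reported back to the master. A log-rolling test would typically follow this by waiting for the table and writing rows so WAL entries accumulate; a hedged sketch of that follow-on step, assuming the common HBaseTestingUtility and client Table APIs (row and qualifier names are illustrative):

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch of a typical follow-on to a successful region open in a
// log-rolling test: wait for the table, then write enough rows that the
// WAL fills up. Row/qualifier/value naming is illustrative only.
public class WriteAfterOpenSketch {
  static void writeSomeRows(HBaseTestingUtility util) throws Exception {
    TableName name = TableName.valueOf("TestLogRolling-testLogRollOnDatanodeDeath");
    util.waitTableAvailable(name);
    try (Table table = util.getConnection().getTable(name)) {
      for (int i = 0; i < 100; i++) {
        Put put = new Put(Bytes.toBytes("row-" + i));
        put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("q"), Bytes.toBytes("value-" + i));
        table.put(put);
      }
    }
  }
}
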
2023-05-27 22:57:00,826 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=cc70acfd85b964308e922cfb097a3b0c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36195,1685228220316 2023-05-27 22:57:00,827 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1685228220428.cc70acfd85b964308e922cfb097a3b0c.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1685228220826"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685228220826"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685228220826"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685228220826"}]},"ts":"1685228220826"} 2023-05-27 22:57:00,832 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-27 22:57:00,832 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure cc70acfd85b964308e922cfb097a3b0c, server=jenkins-hbase4.apache.org,36195,1685228220316 in 182 msec 2023-05-27 22:57:00,834 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-27 22:57:00,835 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=cc70acfd85b964308e922cfb097a3b0c, ASSIGN in 344 msec 2023-05-27 22:57:00,835 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-27 22:57:00,836 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685228220835"}]},"ts":"1685228220835"} 2023-05-27 22:57:00,837 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLED in hbase:meta 2023-05-27 22:57:00,839 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_POST_OPERATION 2023-05-27 22:57:00,841 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath in 411 msec 2023-05-27 22:57:02,936 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-27 22:57:05,423 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-27 22:57:05,424 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-27 22:57:06,388 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnDatanodeDeath' 2023-05-27 22:57:10,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44839] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-27 22:57:10,435 INFO [Listener at localhost/38643] 
client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnDatanodeDeath, procId: 9 completed 2023-05-27 22:57:10,438 DEBUG [Listener at localhost/38643] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRollOnDatanodeDeath 2023-05-27 22:57:10,438 DEBUG [Listener at localhost/38643] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnDatanodeDeath,,1685228220428.cc70acfd85b964308e922cfb097a3b0c. 2023-05-27 22:57:10,451 WARN [Listener at localhost/38643] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 22:57:10,454 WARN [Listener at localhost/38643] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 22:57:10,455 INFO [Listener at localhost/38643] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 22:57:10,459 INFO [Listener at localhost/38643] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/java.io.tmpdir/Jetty_localhost_45363_datanode____.j04mmi/webapp 2023-05-27 22:57:10,550 INFO [Listener at localhost/38643] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45363 2023-05-27 22:57:10,559 WARN [Listener at localhost/43889] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 22:57:10,578 WARN [Listener at localhost/43889] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 22:57:10,582 WARN [Listener at localhost/43889] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 22:57:10,583 INFO [Listener at localhost/43889] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 22:57:10,590 INFO [Listener at localhost/43889] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/java.io.tmpdir/Jetty_localhost_37539_datanode____.cfrw9u/webapp 2023-05-27 22:57:10,668 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb198790d89ae2377: Processing first storage report for DS-3a5de106-33dc-4f9a-bb06-9554e743950a from datanode 2642c8ea-3a04-4b3a-8ee8-d2ce3e261dbc 2023-05-27 22:57:10,668 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb198790d89ae2377: from storage DS-3a5de106-33dc-4f9a-bb06-9554e743950a node DatanodeRegistration(127.0.0.1:46503, datanodeUuid=2642c8ea-3a04-4b3a-8ee8-d2ce3e261dbc, infoPort=34363, infoSecurePort=0, ipcPort=43889, storageInfo=lv=-57;cid=testClusterID;nsid=1609750335;c=1685228218376), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-27 22:57:10,668 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb198790d89ae2377: Processing first storage report for DS-c8e0fa80-3001-4fe7-8bcb-9f23bcc2bf21 from datanode 2642c8ea-3a04-4b3a-8ee8-d2ce3e261dbc 2023-05-27 22:57:10,668 INFO [Block report 
processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb198790d89ae2377: from storage DS-c8e0fa80-3001-4fe7-8bcb-9f23bcc2bf21 node DatanodeRegistration(127.0.0.1:46503, datanodeUuid=2642c8ea-3a04-4b3a-8ee8-d2ce3e261dbc, infoPort=34363, infoSecurePort=0, ipcPort=43889, storageInfo=lv=-57;cid=testClusterID;nsid=1609750335;c=1685228218376), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 22:57:10,697 INFO [Listener at localhost/43889] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37539 2023-05-27 22:57:10,708 WARN [Listener at localhost/36655] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 22:57:10,723 WARN [Listener at localhost/36655] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 22:57:10,725 WARN [Listener at localhost/36655] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 22:57:10,726 INFO [Listener at localhost/36655] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 22:57:10,729 INFO [Listener at localhost/36655] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/java.io.tmpdir/Jetty_localhost_38683_datanode____.8n1uy5/webapp 2023-05-27 22:57:10,800 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x833e3188063b17fa: Processing first storage report for DS-ae29f8b7-6ef7-430d-b497-7d520e019952 from datanode 52f10666-6e66-4d43-a7b8-b489ccc09b00 2023-05-27 22:57:10,800 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x833e3188063b17fa: from storage DS-ae29f8b7-6ef7-430d-b497-7d520e019952 node DatanodeRegistration(127.0.0.1:39903, datanodeUuid=52f10666-6e66-4d43-a7b8-b489ccc09b00, infoPort=36553, infoSecurePort=0, ipcPort=36655, storageInfo=lv=-57;cid=testClusterID;nsid=1609750335;c=1685228218376), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 22:57:10,800 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x833e3188063b17fa: Processing first storage report for DS-b5a37f68-0655-4617-9a15-478a44de3f3c from datanode 52f10666-6e66-4d43-a7b8-b489ccc09b00 2023-05-27 22:57:10,800 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x833e3188063b17fa: from storage DS-b5a37f68-0655-4617-9a15-478a44de3f3c node DatanodeRegistration(127.0.0.1:39903, datanodeUuid=52f10666-6e66-4d43-a7b8-b489ccc09b00, infoPort=36553, infoSecurePort=0, ipcPort=36655, storageInfo=lv=-57;cid=testClusterID;nsid=1609750335;c=1685228218376), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 22:57:10,829 INFO [Listener at localhost/36655] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38683 2023-05-27 22:57:10,837 WARN [Listener at localhost/43647] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 22:57:10,936 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa391772ecfe143c2: Processing first storage report for 
DS-98cef918-d1e2-488d-95e5-d610a8772c97 from datanode 3e81fd00-b6ae-4479-aead-39ca28fb7a72 2023-05-27 22:57:10,937 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa391772ecfe143c2: from storage DS-98cef918-d1e2-488d-95e5-d610a8772c97 node DatanodeRegistration(127.0.0.1:35315, datanodeUuid=3e81fd00-b6ae-4479-aead-39ca28fb7a72, infoPort=35303, infoSecurePort=0, ipcPort=43647, storageInfo=lv=-57;cid=testClusterID;nsid=1609750335;c=1685228218376), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 22:57:10,937 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa391772ecfe143c2: Processing first storage report for DS-18f85194-5e51-4668-a9de-3ec0b6f3e8f2 from datanode 3e81fd00-b6ae-4479-aead-39ca28fb7a72 2023-05-27 22:57:10,937 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa391772ecfe143c2: from storage DS-18f85194-5e51-4668-a9de-3ec0b6f3e8f2 node DatanodeRegistration(127.0.0.1:35315, datanodeUuid=3e81fd00-b6ae-4479-aead-39ca28fb7a72, infoPort=35303, infoSecurePort=0, ipcPort=43647, storageInfo=lv=-57;cid=testClusterID;nsid=1609750335;c=1685228218376), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 22:57:10,942 WARN [Listener at localhost/43647] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 22:57:10,943 WARN [ResponseProcessor for block BP-2128736743-172.31.14.131-1685228218376:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-2128736743-172.31.14.131-1685228218376:blk_1073741829_1005 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-27 22:57:10,946 WARN [ResponseProcessor for block BP-2128736743-172.31.14.131-1685228218376:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-2128736743-172.31.14.131-1685228218376:blk_1073741833_1009 java.io.IOException: Bad response ERROR for BP-2128736743-172.31.14.131-1685228218376:blk_1073741833_1009 from datanode DatanodeInfoWithStorage[127.0.0.1:38065,DS-cd4dd9f2-9785-449d-b4b7-fd846ca22237,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-27 22:57:10,946 WARN [ResponseProcessor for block BP-2128736743-172.31.14.131-1685228218376:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-2128736743-172.31.14.131-1685228218376:blk_1073741832_1008 java.io.IOException: Bad response ERROR for BP-2128736743-172.31.14.131-1685228218376:blk_1073741832_1008 from datanode DatanodeInfoWithStorage[127.0.0.1:38065,DS-cd4dd9f2-9785-449d-b4b7-fd846ca22237,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-27 22:57:10,947 WARN [DataStreamer for file /user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,42231,1685228219175/jenkins-hbase4.apache.org%2C42231%2C1685228219175.meta.1685228219705.meta block BP-2128736743-172.31.14.131-1685228218376:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-2128736743-172.31.14.131-1685228218376:blk_1073741833_1009 in pipeline 
[DatanodeInfoWithStorage[127.0.0.1:34645,DS-6a8213fd-cd62-4a90-81df-bf520a89a643,DISK], DatanodeInfoWithStorage[127.0.0.1:38065,DS-cd4dd9f2-9785-449d-b4b7-fd846ca22237,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:38065,DS-cd4dd9f2-9785-449d-b4b7-fd846ca22237,DISK]) is bad. 2023-05-27 22:57:10,947 WARN [PacketResponder: BP-2128736743-172.31.14.131-1685228218376:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:38065]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:10,947 WARN [PacketResponder: BP-2128736743-172.31.14.131-1685228218376:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:38065]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:10,947 WARN [ResponseProcessor for block BP-2128736743-172.31.14.131-1685228218376:blk_1073741838_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-2128736743-172.31.14.131-1685228218376:blk_1073741838_1014 java.io.IOException: Bad 
response ERROR for BP-2128736743-172.31.14.131-1685228218376:blk_1073741838_1014 from datanode DatanodeInfoWithStorage[127.0.0.1:38065,DS-cd4dd9f2-9785-449d-b4b7-fd846ca22237,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-27 22:57:10,945 WARN [DataStreamer for file /user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/MasterData/WALs/jenkins-hbase4.apache.org,44839,1685228219122/jenkins-hbase4.apache.org%2C44839%2C1685228219122.1685228219257 block BP-2128736743-172.31.14.131-1685228218376:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-2128736743-172.31.14.131-1685228218376:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:38065,DS-cd4dd9f2-9785-449d-b4b7-fd846ca22237,DISK], DatanodeInfoWithStorage[127.0.0.1:34645,DS-6a8213fd-cd62-4a90-81df-bf520a89a643,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:38065,DS-cd4dd9f2-9785-449d-b4b7-fd846ca22237,DISK]) is bad. 2023-05-27 22:57:10,947 WARN [DataStreamer for file /user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,42231,1685228219175/jenkins-hbase4.apache.org%2C42231%2C1685228219175.1685228219565 block BP-2128736743-172.31.14.131-1685228218376:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-2128736743-172.31.14.131-1685228218376:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:34645,DS-6a8213fd-cd62-4a90-81df-bf520a89a643,DISK], DatanodeInfoWithStorage[127.0.0.1:38065,DS-cd4dd9f2-9785-449d-b4b7-fd846ca22237,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:38065,DS-cd4dd9f2-9785-449d-b4b7-fd846ca22237,DISK]) is bad. 2023-05-27 22:57:10,955 WARN [PacketResponder: BP-2128736743-172.31.14.131-1685228218376:blk_1073741838_1014, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:38065]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:10,955 WARN [DataStreamer for file /user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,36195,1685228220316/jenkins-hbase4.apache.org%2C36195%2C1685228220316.1685228220529 block BP-2128736743-172.31.14.131-1685228218376:blk_1073741838_1014] hdfs.DataStreamer(1548): Error Recovery for 
BP-2128736743-172.31.14.131-1685228218376:blk_1073741838_1014 in pipeline [DatanodeInfoWithStorage[127.0.0.1:34645,DS-6a8213fd-cd62-4a90-81df-bf520a89a643,DISK], DatanodeInfoWithStorage[127.0.0.1:38065,DS-cd4dd9f2-9785-449d-b4b7-fd846ca22237,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:38065,DS-cd4dd9f2-9785-449d-b4b7-fd846ca22237,DISK]) is bad. 2023-05-27 22:57:10,961 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_270591704_17 at /127.0.0.1:40380 [Receiving block BP-2128736743-172.31.14.131-1685228218376:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:34645:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:40380 dst: /127.0.0.1:34645 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:10,961 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_270591704_17 at /127.0.0.1:40372 [Receiving block BP-2128736743-172.31.14.131-1685228218376:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:34645:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:40372 dst: /127.0.0.1:34645 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:10,965 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_596758519_17 at /127.0.0.1:50480 [Receiving block BP-2128736743-172.31.14.131-1685228218376:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:34645:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:50480 dst: /127.0.0.1:34645 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:10,967 INFO [Listener at localhost/43647] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 22:57:10,967 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_91149616_17 at /127.0.0.1:40340 [Receiving block BP-2128736743-172.31.14.131-1685228218376:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:34645:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:40340 dst: /127.0.0.1:34645 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:34645 remote=/127.0.0.1:40340]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:10,970 WARN [PacketResponder: BP-2128736743-172.31.14.131-1685228218376:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:34645]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:10,972 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_91149616_17 at /127.0.0.1:35488 [Receiving block BP-2128736743-172.31.14.131-1685228218376:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:38065:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:35488 dst: /127.0.0.1:38065 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:11,072 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_270591704_17 at /127.0.0.1:35498 [Receiving block BP-2128736743-172.31.14.131-1685228218376:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:38065:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:35498 dst: /127.0.0.1:38065 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:11,072 WARN [BP-2128736743-172.31.14.131-1685228218376 heartbeating to localhost/127.0.0.1:44813] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-2128736743-172.31.14.131-1685228218376 (Datanode Uuid ad6b7259-200f-4fc4-8e40-263bca51be38) service to localhost/127.0.0.1:44813 2023-05-27 22:57:11,073 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_596758519_17 at /127.0.0.1:53880 [Receiving block BP-2128736743-172.31.14.131-1685228218376:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:38065:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:53880 dst: /127.0.0.1:38065 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:11,073 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_270591704_17 at /127.0.0.1:35506 [Receiving block BP-2128736743-172.31.14.131-1685228218376:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:38065:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:35506 dst: /127.0.0.1:38065 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:11,076 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/cluster_24fa0eba-0341-662b-7c3a-73a0de2833a3/dfs/data/data3/current/BP-2128736743-172.31.14.131-1685228218376] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 22:57:11,076 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/cluster_24fa0eba-0341-662b-7c3a-73a0de2833a3/dfs/data/data4/current/BP-2128736743-172.31.14.131-1685228218376] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 22:57:11,077 WARN [Listener at localhost/43647] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 22:57:11,077 WARN [ResponseProcessor for block BP-2128736743-172.31.14.131-1685228218376:blk_1073741838_1017] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-2128736743-172.31.14.131-1685228218376:blk_1073741838_1017 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-27 22:57:11,077 WARN [ResponseProcessor for block BP-2128736743-172.31.14.131-1685228218376:blk_1073741829_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-2128736743-172.31.14.131-1685228218376:blk_1073741829_1015 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at 
org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-27 22:57:11,078 WARN [ResponseProcessor for block BP-2128736743-172.31.14.131-1685228218376:blk_1073741832_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-2128736743-172.31.14.131-1685228218376:blk_1073741832_1018 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-27 22:57:11,077 WARN [ResponseProcessor for block BP-2128736743-172.31.14.131-1685228218376:blk_1073741833_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-2128736743-172.31.14.131-1685228218376:blk_1073741833_1016 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-27 22:57:11,091 INFO [Listener at localhost/43647] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 22:57:11,195 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_270591704_17 at /127.0.0.1:49724 [Receiving block BP-2128736743-172.31.14.131-1685228218376:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:34645:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49724 dst: /127.0.0.1:34645 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:11,195 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_91149616_17 at /127.0.0.1:49692 [Receiving block BP-2128736743-172.31.14.131-1685228218376:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:34645:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49692 dst: /127.0.0.1:34645 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:11,197 WARN [BP-2128736743-172.31.14.131-1685228218376 heartbeating to localhost/127.0.0.1:44813] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-27 22:57:11,195 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_270591704_17 at /127.0.0.1:49710 [Receiving block BP-2128736743-172.31.14.131-1685228218376:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:34645:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49710 dst: /127.0.0.1:34645 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:11,195 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_596758519_17 at /127.0.0.1:49698 [Receiving block BP-2128736743-172.31.14.131-1685228218376:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:34645:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49698 dst: /127.0.0.1:34645 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:11,197 WARN [BP-2128736743-172.31.14.131-1685228218376 heartbeating to localhost/127.0.0.1:44813] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-2128736743-172.31.14.131-1685228218376 (Datanode Uuid 2fcaa603-6461-47f4-912e-80da2ed233ee) service to localhost/127.0.0.1:44813 2023-05-27 22:57:11,199 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/cluster_24fa0eba-0341-662b-7c3a-73a0de2833a3/dfs/data/data1/current/BP-2128736743-172.31.14.131-1685228218376] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 22:57:11,199 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/cluster_24fa0eba-0341-662b-7c3a-73a0de2833a3/dfs/data/data2/current/BP-2128736743-172.31.14.131-1685228218376] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 22:57:11,204 DEBUG [Listener at localhost/43647] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-27 22:57:11,206 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45558, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-27 22:57:11,207 WARN [RS:1;jenkins-hbase4:36195.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=4, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34645,DS-6a8213fd-cd62-4a90-81df-bf520a89a643,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:57:11,208 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C36195%2C1685228220316:(num 1685228220529) roll requested 2023-05-27 22:57:11,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36195] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34645,DS-6a8213fd-cd62-4a90-81df-bf520a89a643,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:57:11,209 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36195] ipc.CallRunner(144): callId: 9 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:45558 deadline: 1685228241206, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL 2023-05-27 22:57:11,212 WARN [Thread-629] hdfs.DataStreamer(1658): Abandoning BP-2128736743-172.31.14.131-1685228218376:blk_1073741839_1019 2023-05-27 22:57:11,214 WARN [Thread-629] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:38065,DS-cd4dd9f2-9785-449d-b4b7-fd846ca22237,DISK] 2023-05-27 22:57:11,226 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL 2023-05-27 22:57:11,226 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,36195,1685228220316/jenkins-hbase4.apache.org%2C36195%2C1685228220316.1685228220529 with entries=1, filesize=466 B; new WAL /user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,36195,1685228220316/jenkins-hbase4.apache.org%2C36195%2C1685228220316.1685228231208 2023-05-27 22:57:11,226 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46503,DS-3a5de106-33dc-4f9a-bb06-9554e743950a,DISK], DatanodeInfoWithStorage[127.0.0.1:39903,DS-ae29f8b7-6ef7-430d-b497-7d520e019952,DISK]] 2023-05-27 22:57:11,226 DEBUG [regionserver/jenkins-hbase4:0.logRoller] 
wal.AbstractFSWAL(716): hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,36195,1685228220316/jenkins-hbase4.apache.org%2C36195%2C1685228220316.1685228220529 is not closed yet, will try archiving it next time 2023-05-27 22:57:11,226 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34645,DS-6a8213fd-cd62-4a90-81df-bf520a89a643,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:57:11,226 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,36195,1685228220316/jenkins-hbase4.apache.org%2C36195%2C1685228220316.1685228220529; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34645,DS-6a8213fd-cd62-4a90-81df-bf520a89a643,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:57:11,227 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,36195,1685228220316/jenkins-hbase4.apache.org%2C36195%2C1685228220316.1685228220529 to hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/oldWALs/jenkins-hbase4.apache.org%2C36195%2C1685228220316.1685228220529 2023-05-27 22:57:23,310 INFO [Listener at localhost/43647] wal.TestLogRolling(375): log.getCurrentFileName(): hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,36195,1685228220316/jenkins-hbase4.apache.org%2C36195%2C1685228220316.1685228231208 2023-05-27 22:57:23,311 WARN [Listener at localhost/43647] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 22:57:23,313 WARN [ResponseProcessor for block BP-2128736743-172.31.14.131-1685228218376:blk_1073741840_1020] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-2128736743-172.31.14.131-1685228218376:blk_1073741840_1020 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-27 22:57:23,313 WARN [DataStreamer for file /user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,36195,1685228220316/jenkins-hbase4.apache.org%2C36195%2C1685228220316.1685228231208 block BP-2128736743-172.31.14.131-1685228218376:blk_1073741840_1020] hdfs.DataStreamer(1548): Error Recovery for 
BP-2128736743-172.31.14.131-1685228218376:blk_1073741840_1020 in pipeline [DatanodeInfoWithStorage[127.0.0.1:46503,DS-3a5de106-33dc-4f9a-bb06-9554e743950a,DISK], DatanodeInfoWithStorage[127.0.0.1:39903,DS-ae29f8b7-6ef7-430d-b497-7d520e019952,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:46503,DS-3a5de106-33dc-4f9a-bb06-9554e743950a,DISK]) is bad. 2023-05-27 22:57:23,318 INFO [Listener at localhost/43647] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 22:57:23,320 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_596758519_17 at /127.0.0.1:53146 [Receiving block BP-2128736743-172.31.14.131-1685228218376:blk_1073741840_1020]] datanode.DataXceiver(323): 127.0.0.1:39903:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:53146 dst: /127.0.0.1:39903 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:39903 remote=/127.0.0.1:53146]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:23,321 WARN [PacketResponder: BP-2128736743-172.31.14.131-1685228218376:blk_1073741840_1020, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:39903]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:23,322 ERROR [DataXceiver 
for client DFSClient_NONMAPREDUCE_596758519_17 at /127.0.0.1:34532 [Receiving block BP-2128736743-172.31.14.131-1685228218376:blk_1073741840_1020]] datanode.DataXceiver(323): 127.0.0.1:46503:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34532 dst: /127.0.0.1:46503 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:23,426 WARN [BP-2128736743-172.31.14.131-1685228218376 heartbeating to localhost/127.0.0.1:44813] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-27 22:57:23,426 WARN [BP-2128736743-172.31.14.131-1685228218376 heartbeating to localhost/127.0.0.1:44813] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-2128736743-172.31.14.131-1685228218376 (Datanode Uuid 2642c8ea-3a04-4b3a-8ee8-d2ce3e261dbc) service to localhost/127.0.0.1:44813 2023-05-27 22:57:23,427 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/cluster_24fa0eba-0341-662b-7c3a-73a0de2833a3/dfs/data/data5/current/BP-2128736743-172.31.14.131-1685228218376] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 22:57:23,427 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/cluster_24fa0eba-0341-662b-7c3a-73a0de2833a3/dfs/data/data6/current/BP-2128736743-172.31.14.131-1685228218376] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 22:57:23,433 WARN [sync.3] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:39903,DS-ae29f8b7-6ef7-430d-b497-7d520e019952,DISK]] 2023-05-27 22:57:23,433 WARN [sync.3] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:39903,DS-ae29f8b7-6ef7-430d-b497-7d520e019952,DISK]] 2023-05-27 22:57:23,433 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C36195%2C1685228220316:(num 1685228231208) roll requested 2023-05-27 22:57:23,439 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_596758519_17 at /127.0.0.1:39100 [Receiving block BP-2128736743-172.31.14.131-1685228218376:blk_1073741841_1022]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/cluster_24fa0eba-0341-662b-7c3a-73a0de2833a3/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/cluster_24fa0eba-0341-662b-7c3a-73a0de2833a3/dfs/data/data8/current]'}, localName='127.0.0.1:39903', datanodeUuid='52f10666-6e66-4d43-a7b8-b489ccc09b00', xmitsInProgress=0}:Exception transfering block BP-2128736743-172.31.14.131-1685228218376:blk_1073741841_1022 to mirror 127.0.0.1:38065: java.net.ConnectException: Connection refused 2023-05-27 22:57:23,439 WARN [Thread-639] hdfs.DataStreamer(1658): Abandoning BP-2128736743-172.31.14.131-1685228218376:blk_1073741841_1022 2023-05-27 22:57:23,439 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_596758519_17 at /127.0.0.1:39100 [Receiving block BP-2128736743-172.31.14.131-1685228218376:blk_1073741841_1022]] datanode.DataXceiver(323): 127.0.0.1:39903:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:39100 dst: /127.0.0.1:39903 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:23,440 WARN [Thread-639] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:38065,DS-cd4dd9f2-9785-449d-b4b7-fd846ca22237,DISK] 2023-05-27 22:57:23,441 WARN [Thread-639] hdfs.DataStreamer(1658): Abandoning BP-2128736743-172.31.14.131-1685228218376:blk_1073741842_1023 2023-05-27 22:57:23,442 WARN [Thread-639] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34645,DS-6a8213fd-cd62-4a90-81df-bf520a89a643,DISK] 2023-05-27 22:57:23,452 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,36195,1685228220316/jenkins-hbase4.apache.org%2C36195%2C1685228220316.1685228231208 with entries=2, filesize=2.36 KB; new WAL 
/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,36195,1685228220316/jenkins-hbase4.apache.org%2C36195%2C1685228220316.1685228243433 2023-05-27 22:57:23,452 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39903,DS-ae29f8b7-6ef7-430d-b497-7d520e019952,DISK], DatanodeInfoWithStorage[127.0.0.1:35315,DS-98cef918-d1e2-488d-95e5-d610a8772c97,DISK]] 2023-05-27 22:57:23,452 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,36195,1685228220316/jenkins-hbase4.apache.org%2C36195%2C1685228220316.1685228231208 is not closed yet, will try archiving it next time 2023-05-27 22:57:25,815 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@d9dbd59] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:39903, datanodeUuid=52f10666-6e66-4d43-a7b8-b489ccc09b00, infoPort=36553, infoSecurePort=0, ipcPort=36655, storageInfo=lv=-57;cid=testClusterID;nsid=1609750335;c=1685228218376):Failed to transfer BP-2128736743-172.31.14.131-1685228218376:blk_1073741840_1021 to 127.0.0.1:46503 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:27,438 WARN [Listener at localhost/43647] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 22:57:27,439 WARN [ResponseProcessor for block BP-2128736743-172.31.14.131-1685228218376:blk_1073741843_1024] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-2128736743-172.31.14.131-1685228218376:blk_1073741843_1024 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-27 22:57:27,440 WARN [DataStreamer for file /user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,36195,1685228220316/jenkins-hbase4.apache.org%2C36195%2C1685228220316.1685228243433 block BP-2128736743-172.31.14.131-1685228218376:blk_1073741843_1024] hdfs.DataStreamer(1548): Error Recovery for BP-2128736743-172.31.14.131-1685228218376:blk_1073741843_1024 in pipeline [DatanodeInfoWithStorage[127.0.0.1:39903,DS-ae29f8b7-6ef7-430d-b497-7d520e019952,DISK], DatanodeInfoWithStorage[127.0.0.1:35315,DS-98cef918-d1e2-488d-95e5-d610a8772c97,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:39903,DS-ae29f8b7-6ef7-430d-b497-7d520e019952,DISK]) is bad. 
2023-05-27 22:57:27,443 INFO [Listener at localhost/43647] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 22:57:27,444 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_596758519_17 at /127.0.0.1:38556 [Receiving block BP-2128736743-172.31.14.131-1685228218376:blk_1073741843_1024]] datanode.DataXceiver(323): 127.0.0.1:35315:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:38556 dst: /127.0.0.1:35315 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:35315 remote=/127.0.0.1:38556]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:27,444 WARN [PacketResponder: BP-2128736743-172.31.14.131-1685228218376:blk_1073741843_1024, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:35315]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:27,445 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_596758519_17 at /127.0.0.1:39102 [Receiving block BP-2128736743-172.31.14.131-1685228218376:blk_1073741843_1024]] datanode.DataXceiver(323): 127.0.0.1:39903:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:39102 dst: /127.0.0.1:39903 java.io.InterruptedIOException: Interrupted while waiting for IO on 
channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:27,548 WARN [BP-2128736743-172.31.14.131-1685228218376 heartbeating to localhost/127.0.0.1:44813] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-27 22:57:27,548 WARN [BP-2128736743-172.31.14.131-1685228218376 heartbeating to localhost/127.0.0.1:44813] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-2128736743-172.31.14.131-1685228218376 (Datanode Uuid 52f10666-6e66-4d43-a7b8-b489ccc09b00) service to localhost/127.0.0.1:44813 2023-05-27 22:57:27,549 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/cluster_24fa0eba-0341-662b-7c3a-73a0de2833a3/dfs/data/data7/current/BP-2128736743-172.31.14.131-1685228218376] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 22:57:27,549 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/cluster_24fa0eba-0341-662b-7c3a-73a0de2833a3/dfs/data/data8/current/BP-2128736743-172.31.14.131-1685228218376] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 22:57:27,553 WARN [sync.1] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:35315,DS-98cef918-d1e2-488d-95e5-d610a8772c97,DISK]] 2023-05-27 22:57:27,553 WARN [sync.1] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:35315,DS-98cef918-d1e2-488d-95e5-d610a8772c97,DISK]] 2023-05-27 22:57:27,553 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C36195%2C1685228220316:(num 1685228243433) roll requested 2023-05-27 22:57:27,556 WARN [Thread-651] hdfs.DataStreamer(1658): Abandoning BP-2128736743-172.31.14.131-1685228218376:blk_1073741844_1026 2023-05-27 22:57:27,557 WARN [Thread-651] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:46503,DS-3a5de106-33dc-4f9a-bb06-9554e743950a,DISK] 2023-05-27 22:57:27,558 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36195] regionserver.HRegion(9158): Flush requested on cc70acfd85b964308e922cfb097a3b0c 2023-05-27 22:57:27,559 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing cc70acfd85b964308e922cfb097a3b0c 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-27 22:57:27,559 WARN [Thread-651] hdfs.DataStreamer(1658): Abandoning BP-2128736743-172.31.14.131-1685228218376:blk_1073741845_1027 2023-05-27 22:57:27,560 WARN [Thread-651] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:38065,DS-cd4dd9f2-9785-449d-b4b7-fd846ca22237,DISK] 2023-05-27 22:57:27,561 WARN [Thread-651] hdfs.DataStreamer(1658): Abandoning BP-2128736743-172.31.14.131-1685228218376:blk_1073741846_1028 2023-05-27 22:57:27,567 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_596758519_17 at /127.0.0.1:38570 [Receiving block BP-2128736743-172.31.14.131-1685228218376:blk_1073741847_1029]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/cluster_24fa0eba-0341-662b-7c3a-73a0de2833a3/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/cluster_24fa0eba-0341-662b-7c3a-73a0de2833a3/dfs/data/data10/current]'}, localName='127.0.0.1:35315', datanodeUuid='3e81fd00-b6ae-4479-aead-39ca28fb7a72', xmitsInProgress=0}:Exception transfering block BP-2128736743-172.31.14.131-1685228218376:blk_1073741847_1029 to mirror 127.0.0.1:46503: java.net.ConnectException: Connection refused 2023-05-27 22:57:27,567 WARN [Thread-652] hdfs.DataStreamer(1658): Abandoning BP-2128736743-172.31.14.131-1685228218376:blk_1073741847_1029 2023-05-27 22:57:27,567 WARN [Thread-651] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39903,DS-ae29f8b7-6ef7-430d-b497-7d520e019952,DISK] 2023-05-27 22:57:27,567 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_596758519_17 at /127.0.0.1:38570 [Receiving block BP-2128736743-172.31.14.131-1685228218376:blk_1073741847_1029]] datanode.DataXceiver(323): 127.0.0.1:35315:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:38570 dst: /127.0.0.1:35315 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at 
org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:27,568 WARN [Thread-652] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:46503,DS-3a5de106-33dc-4f9a-bb06-9554e743950a,DISK] 2023-05-27 22:57:27,569 WARN [Thread-651] hdfs.DataStreamer(1658): Abandoning BP-2128736743-172.31.14.131-1685228218376:blk_1073741848_1030 2023-05-27 22:57:27,569 WARN [Thread-652] hdfs.DataStreamer(1658): Abandoning BP-2128736743-172.31.14.131-1685228218376:blk_1073741849_1031 2023-05-27 22:57:27,569 WARN [Thread-651] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34645,DS-6a8213fd-cd62-4a90-81df-bf520a89a643,DISK] 2023-05-27 22:57:27,569 WARN [Thread-652] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34645,DS-6a8213fd-cd62-4a90-81df-bf520a89a643,DISK] 2023-05-27 22:57:27,570 WARN [IPC Server handler 0 on default port 44813] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-05-27 22:57:27,570 WARN [IPC Server handler 0 on default port 44813] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-05-27 22:57:27,570 WARN [IPC Server handler 0 on default port 44813] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-05-27 22:57:27,571 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_596758519_17 at /127.0.0.1:38578 [Receiving block BP-2128736743-172.31.14.131-1685228218376:blk_1073741850_1032]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/cluster_24fa0eba-0341-662b-7c3a-73a0de2833a3/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/cluster_24fa0eba-0341-662b-7c3a-73a0de2833a3/dfs/data/data10/current]'}, localName='127.0.0.1:35315', datanodeUuid='3e81fd00-b6ae-4479-aead-39ca28fb7a72', xmitsInProgress=0}:Exception transfering block BP-2128736743-172.31.14.131-1685228218376:blk_1073741850_1032 to mirror 127.0.0.1:38065: 
java.net.ConnectException: Connection refused 2023-05-27 22:57:27,572 WARN [Thread-652] hdfs.DataStreamer(1658): Abandoning BP-2128736743-172.31.14.131-1685228218376:blk_1073741850_1032 2023-05-27 22:57:27,572 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_596758519_17 at /127.0.0.1:38578 [Receiving block BP-2128736743-172.31.14.131-1685228218376:blk_1073741850_1032]] datanode.DataXceiver(323): 127.0.0.1:35315:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:38578 dst: /127.0.0.1:35315 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:27,572 WARN [Thread-652] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:38065,DS-cd4dd9f2-9785-449d-b4b7-fd846ca22237,DISK] 2023-05-27 22:57:27,575 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_596758519_17 at /127.0.0.1:38588 [Receiving block BP-2128736743-172.31.14.131-1685228218376:blk_1073741852_1034]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/cluster_24fa0eba-0341-662b-7c3a-73a0de2833a3/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/cluster_24fa0eba-0341-662b-7c3a-73a0de2833a3/dfs/data/data10/current]'}, localName='127.0.0.1:35315', datanodeUuid='3e81fd00-b6ae-4479-aead-39ca28fb7a72', xmitsInProgress=0}:Exception transfering block BP-2128736743-172.31.14.131-1685228218376:blk_1073741852_1034 to mirror 127.0.0.1:39903: java.net.ConnectException: Connection refused 2023-05-27 22:57:27,576 WARN [Thread-652] hdfs.DataStreamer(1658): Abandoning BP-2128736743-172.31.14.131-1685228218376:blk_1073741852_1034 2023-05-27 22:57:27,576 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_596758519_17 at /127.0.0.1:38588 [Receiving block BP-2128736743-172.31.14.131-1685228218376:blk_1073741852_1034]] datanode.DataXceiver(323): 127.0.0.1:35315:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:38588 dst: /127.0.0.1:35315 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:27,576 WARN [Thread-652] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39903,DS-ae29f8b7-6ef7-430d-b497-7d520e019952,DISK] 2023-05-27 22:57:27,576 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,36195,1685228220316/jenkins-hbase4.apache.org%2C36195%2C1685228220316.1685228243433 with entries=13, filesize=14.09 KB; new WAL /user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,36195,1685228220316/jenkins-hbase4.apache.org%2C36195%2C1685228220316.1685228247553 2023-05-27 22:57:27,576 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35315,DS-98cef918-d1e2-488d-95e5-d610a8772c97,DISK]] 2023-05-27 22:57:27,577 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,36195,1685228220316/jenkins-hbase4.apache.org%2C36195%2C1685228220316.1685228243433 is not closed yet, will try archiving it next time 2023-05-27 22:57:27,577 WARN [IPC Server handler 0 on default port 44813] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-05-27 22:57:27,578 WARN [IPC Server handler 0 on default port 44813] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-05-27 22:57:27,578 WARN [IPC Server handler 0 on default port 44813] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-05-27 22:57:27,772 WARN [sync.4] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:35315,DS-98cef918-d1e2-488d-95e5-d610a8772c97,DISK]] 2023-05-27 22:57:27,772 WARN [sync.4] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:35315,DS-98cef918-d1e2-488d-95e5-d610a8772c97,DISK]] 2023-05-27 22:57:27,772 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C36195%2C1685228220316:(num 1685228247553) roll requested 2023-05-27 22:57:27,775 WARN [Thread-662] hdfs.DataStreamer(1658): Abandoning BP-2128736743-172.31.14.131-1685228218376:blk_1073741854_1036 2023-05-27 22:57:27,776 WARN [Thread-662] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:46503,DS-3a5de106-33dc-4f9a-bb06-9554e743950a,DISK] 2023-05-27 22:57:27,777 WARN [Thread-662] hdfs.DataStreamer(1658): Abandoning BP-2128736743-172.31.14.131-1685228218376:blk_1073741855_1037 2023-05-27 22:57:27,778 WARN [Thread-662] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34645,DS-6a8213fd-cd62-4a90-81df-bf520a89a643,DISK] 2023-05-27 22:57:27,781 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_596758519_17 at /127.0.0.1:38608 [Receiving block BP-2128736743-172.31.14.131-1685228218376:blk_1073741856_1038]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/cluster_24fa0eba-0341-662b-7c3a-73a0de2833a3/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/cluster_24fa0eba-0341-662b-7c3a-73a0de2833a3/dfs/data/data10/current]'}, localName='127.0.0.1:35315', datanodeUuid='3e81fd00-b6ae-4479-aead-39ca28fb7a72', xmitsInProgress=0}:Exception transfering block BP-2128736743-172.31.14.131-1685228218376:blk_1073741856_1038 to mirror 127.0.0.1:39903: java.net.ConnectException: Connection refused 2023-05-27 22:57:27,781 WARN [Thread-662] hdfs.DataStreamer(1658): Abandoning BP-2128736743-172.31.14.131-1685228218376:blk_1073741856_1038 2023-05-27 22:57:27,781 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_596758519_17 at /127.0.0.1:38608 [Receiving block BP-2128736743-172.31.14.131-1685228218376:blk_1073741856_1038]] datanode.DataXceiver(323): 127.0.0.1:35315:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:38608 dst: /127.0.0.1:35315 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:27,781 WARN [Thread-662] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39903,DS-ae29f8b7-6ef7-430d-b497-7d520e019952,DISK] 2023-05-27 22:57:27,782 WARN [Thread-662] hdfs.DataStreamer(1658): Abandoning 
BP-2128736743-172.31.14.131-1685228218376:blk_1073741857_1039 2023-05-27 22:57:27,783 WARN [Thread-662] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:38065,DS-cd4dd9f2-9785-449d-b4b7-fd846ca22237,DISK] 2023-05-27 22:57:27,784 WARN [IPC Server handler 3 on default port 44813] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-05-27 22:57:27,784 WARN [IPC Server handler 3 on default port 44813] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-05-27 22:57:27,784 WARN [IPC Server handler 3 on default port 44813] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-05-27 22:57:27,788 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,36195,1685228220316/jenkins-hbase4.apache.org%2C36195%2C1685228220316.1685228247553 with entries=1, filesize=1.22 KB; new WAL /user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,36195,1685228220316/jenkins-hbase4.apache.org%2C36195%2C1685228220316.1685228247773 2023-05-27 22:57:27,789 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35315,DS-98cef918-d1e2-488d-95e5-d610a8772c97,DISK]] 2023-05-27 22:57:27,789 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,36195,1685228220316/jenkins-hbase4.apache.org%2C36195%2C1685228220316.1685228243433 is not closed yet, will try archiving it next time 2023-05-27 22:57:27,789 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,36195,1685228220316/jenkins-hbase4.apache.org%2C36195%2C1685228220316.1685228247553 is not closed yet, will try archiving it next time 2023-05-27 22:57:27,975 WARN [sync.1] wal.FSHLog(757): Too many consecutive RollWriter requests, it's a sign of the total number of live datanodes is lower than the tolerable replicas. 
2023-05-27 22:57:27,980 DEBUG [Close-WAL-Writer-0] wal.AbstractFSWAL(716): hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,36195,1685228220316/jenkins-hbase4.apache.org%2C36195%2C1685228220316.1685228247553 is not closed yet, will try archiving it next time 2023-05-27 22:57:27,982 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=12 (bloomFilter=true), to=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/default/TestLogRolling-testLogRollOnDatanodeDeath/cc70acfd85b964308e922cfb097a3b0c/.tmp/info/c1a0b8fa6e5b4bc3a30ac56f6d4dc006 2023-05-27 22:57:27,992 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/default/TestLogRolling-testLogRollOnDatanodeDeath/cc70acfd85b964308e922cfb097a3b0c/.tmp/info/c1a0b8fa6e5b4bc3a30ac56f6d4dc006 as hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/default/TestLogRolling-testLogRollOnDatanodeDeath/cc70acfd85b964308e922cfb097a3b0c/info/c1a0b8fa6e5b4bc3a30ac56f6d4dc006 2023-05-27 22:57:27,999 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/default/TestLogRolling-testLogRollOnDatanodeDeath/cc70acfd85b964308e922cfb097a3b0c/info/c1a0b8fa6e5b4bc3a30ac56f6d4dc006, entries=5, sequenceid=12, filesize=10.0 K 2023-05-27 22:57:28,000 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=9.45 KB/9681 for cc70acfd85b964308e922cfb097a3b0c in 442ms, sequenceid=12, compaction requested=false 2023-05-27 22:57:28,000 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for cc70acfd85b964308e922cfb097a3b0c: 2023-05-27 22:57:28,181 WARN [Listener at localhost/43647] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 22:57:28,184 WARN [Listener at localhost/43647] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 22:57:28,185 INFO [Listener at localhost/43647] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 22:57:28,189 INFO [Listener at localhost/43647] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/java.io.tmpdir/Jetty_localhost_38073_datanode____.7t69zm/webapp 2023-05-27 22:57:28,191 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,36195,1685228220316/jenkins-hbase4.apache.org%2C36195%2C1685228220316.1685228231208 to hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/oldWALs/jenkins-hbase4.apache.org%2C36195%2C1685228220316.1685228231208 2023-05-27 22:57:28,280 INFO [Listener at localhost/43647] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38073 2023-05-27 22:57:28,289 WARN [Listener at localhost/42117] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 
22:57:28,382 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x74d781f4dd91a810: Processing first storage report for DS-cd4dd9f2-9785-449d-b4b7-fd846ca22237 from datanode ad6b7259-200f-4fc4-8e40-263bca51be38 2023-05-27 22:57:28,383 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x74d781f4dd91a810: from storage DS-cd4dd9f2-9785-449d-b4b7-fd846ca22237 node DatanodeRegistration(127.0.0.1:43305, datanodeUuid=ad6b7259-200f-4fc4-8e40-263bca51be38, infoPort=45857, infoSecurePort=0, ipcPort=42117, storageInfo=lv=-57;cid=testClusterID;nsid=1609750335;c=1685228218376), blocks: 7, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-27 22:57:28,383 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x74d781f4dd91a810: Processing first storage report for DS-5eb20716-d264-4745-8874-af0128071b34 from datanode ad6b7259-200f-4fc4-8e40-263bca51be38 2023-05-27 22:57:28,383 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x74d781f4dd91a810: from storage DS-5eb20716-d264-4745-8874-af0128071b34 node DatanodeRegistration(127.0.0.1:43305, datanodeUuid=ad6b7259-200f-4fc4-8e40-263bca51be38, infoPort=45857, infoSecurePort=0, ipcPort=42117, storageInfo=lv=-57;cid=testClusterID;nsid=1609750335;c=1685228218376), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 22:57:28,938 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@383500ef] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:35315, datanodeUuid=3e81fd00-b6ae-4479-aead-39ca28fb7a72, infoPort=35303, infoSecurePort=0, ipcPort=43647, storageInfo=lv=-57;cid=testClusterID;nsid=1609750335;c=1685228218376):Failed to transfer BP-2128736743-172.31.14.131-1685228218376:blk_1073741843_1025 to 127.0.0.1:39903 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:28,938 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@55004a34] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:35315, datanodeUuid=3e81fd00-b6ae-4479-aead-39ca28fb7a72, infoPort=35303, infoSecurePort=0, ipcPort=43647, storageInfo=lv=-57;cid=testClusterID;nsid=1609750335;c=1685228218376):Failed to transfer BP-2128736743-172.31.14.131-1685228218376:blk_1073741853_1035 to 127.0.0.1:39903 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:29,333 WARN [master/jenkins-hbase4:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, 
requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34645,DS-6a8213fd-cd62-4a90-81df-bf520a89a643,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:57:29,333 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C44839%2C1685228219122:(num 1685228219257) roll requested 2023-05-27 22:57:29,338 WARN [Thread-703] hdfs.DataStreamer(1658): Abandoning BP-2128736743-172.31.14.131-1685228218376:blk_1073741859_1041 2023-05-27 22:57:29,338 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34645,DS-6a8213fd-cd62-4a90-81df-bf520a89a643,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:57:29,339 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423) at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at 
com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34645,DS-6a8213fd-cd62-4a90-81df-bf520a89a643,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:57:29,340 WARN [Thread-703] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34645,DS-6a8213fd-cd62-4a90-81df-bf520a89a643,DISK] 2023-05-27 22:57:29,341 WARN [Thread-703] hdfs.DataStreamer(1658): Abandoning BP-2128736743-172.31.14.131-1685228218376:blk_1073741860_1042 2023-05-27 22:57:29,341 WARN [Thread-703] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39903,DS-ae29f8b7-6ef7-430d-b497-7d520e019952,DISK] 2023-05-27 22:57:29,344 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_91149616_17 at /127.0.0.1:38638 [Receiving block BP-2128736743-172.31.14.131-1685228218376:blk_1073741861_1043]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/cluster_24fa0eba-0341-662b-7c3a-73a0de2833a3/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/cluster_24fa0eba-0341-662b-7c3a-73a0de2833a3/dfs/data/data10/current]'}, localName='127.0.0.1:35315', datanodeUuid='3e81fd00-b6ae-4479-aead-39ca28fb7a72', xmitsInProgress=0}:Exception transfering block BP-2128736743-172.31.14.131-1685228218376:blk_1073741861_1043 to mirror 127.0.0.1:46503: java.net.ConnectException: Connection refused 2023-05-27 22:57:29,344 WARN [Thread-703] hdfs.DataStreamer(1658): Abandoning BP-2128736743-172.31.14.131-1685228218376:blk_1073741861_1043 2023-05-27 22:57:29,344 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_91149616_17 at /127.0.0.1:38638 [Receiving block BP-2128736743-172.31.14.131-1685228218376:blk_1073741861_1043]] datanode.DataXceiver(323): 127.0.0.1:35315:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:38638 dst: /127.0.0.1:35315 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:29,344 WARN [Thread-703] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:46503,DS-3a5de106-33dc-4f9a-bb06-9554e743950a,DISK] 2023-05-27 22:57:29,355 WARN [master:store-WAL-Roller] 
wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL 2023-05-27 22:57:29,355 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/MasterData/WALs/jenkins-hbase4.apache.org,44839,1685228219122/jenkins-hbase4.apache.org%2C44839%2C1685228219122.1685228219257 with entries=88, filesize=43.71 KB; new WAL /user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/MasterData/WALs/jenkins-hbase4.apache.org,44839,1685228219122/jenkins-hbase4.apache.org%2C44839%2C1685228219122.1685228249333 2023-05-27 22:57:29,356 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43305,DS-cd4dd9f2-9785-449d-b4b7-fd846ca22237,DISK], DatanodeInfoWithStorage[127.0.0.1:35315,DS-98cef918-d1e2-488d-95e5-d610a8772c97,DISK]] 2023-05-27 22:57:29,356 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34645,DS-6a8213fd-cd62-4a90-81df-bf520a89a643,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:57:29,357 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/MasterData/WALs/jenkins-hbase4.apache.org,44839,1685228219122/jenkins-hbase4.apache.org%2C44839%2C1685228219122.1685228219257 is not closed yet, will try archiving it next time 2023-05-27 22:57:29,357 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/MasterData/WALs/jenkins-hbase4.apache.org,44839,1685228219122/jenkins-hbase4.apache.org%2C44839%2C1685228219122.1685228219257; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34645,DS-6a8213fd-cd62-4a90-81df-bf520a89a643,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:57:41,383 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@48a2a4b6] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:43305, datanodeUuid=ad6b7259-200f-4fc4-8e40-263bca51be38, infoPort=45857, infoSecurePort=0, ipcPort=42117, storageInfo=lv=-57;cid=testClusterID;nsid=1609750335;c=1685228218376):Failed to transfer BP-2128736743-172.31.14.131-1685228218376:blk_1073741837_1013 to 127.0.0.1:39903 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:41,383 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@2ebba4b6] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:43305, datanodeUuid=ad6b7259-200f-4fc4-8e40-263bca51be38, infoPort=45857, infoSecurePort=0, ipcPort=42117, storageInfo=lv=-57;cid=testClusterID;nsid=1609750335;c=1685228218376):Failed to transfer BP-2128736743-172.31.14.131-1685228218376:blk_1073741835_1011 to 127.0.0.1:46503 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:42,383 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@5a02935d] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:43305, datanodeUuid=ad6b7259-200f-4fc4-8e40-263bca51be38, infoPort=45857, infoSecurePort=0, ipcPort=42117, storageInfo=lv=-57;cid=testClusterID;nsid=1609750335;c=1685228218376):Failed to transfer BP-2128736743-172.31.14.131-1685228218376:blk_1073741831_1007 to 127.0.0.1:46503 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:44,383 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@26de2c5a] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:43305, datanodeUuid=ad6b7259-200f-4fc4-8e40-263bca51be38, infoPort=45857, infoSecurePort=0, ipcPort=42117, 
storageInfo=lv=-57;cid=testClusterID;nsid=1609750335;c=1685228218376):Failed to transfer BP-2128736743-172.31.14.131-1685228218376:blk_1073741828_1004 to 127.0.0.1:39903 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:44,383 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@55b3a223] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:43305, datanodeUuid=ad6b7259-200f-4fc4-8e40-263bca51be38, infoPort=45857, infoSecurePort=0, ipcPort=42117, storageInfo=lv=-57;cid=testClusterID;nsid=1609750335;c=1685228218376):Failed to transfer BP-2128736743-172.31.14.131-1685228218376:blk_1073741826_1002 to 127.0.0.1:39903 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:46,828 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_91149616_17 at /127.0.0.1:44762 [Receiving block BP-2128736743-172.31.14.131-1685228218376:blk_1073741863_1045]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/cluster_24fa0eba-0341-662b-7c3a-73a0de2833a3/dfs/data/data3/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/cluster_24fa0eba-0341-662b-7c3a-73a0de2833a3/dfs/data/data4/current]'}, localName='127.0.0.1:43305', datanodeUuid='ad6b7259-200f-4fc4-8e40-263bca51be38', xmitsInProgress=0}:Exception transfering block BP-2128736743-172.31.14.131-1685228218376:blk_1073741863_1045 to mirror 127.0.0.1:39903: java.net.ConnectException: Connection refused 2023-05-27 22:57:46,828 WARN [Thread-720] hdfs.DataStreamer(1658): Abandoning BP-2128736743-172.31.14.131-1685228218376:blk_1073741863_1045 2023-05-27 22:57:46,828 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_91149616_17 at /127.0.0.1:44762 [Receiving block BP-2128736743-172.31.14.131-1685228218376:blk_1073741863_1045]] datanode.DataXceiver(323): 127.0.0.1:43305:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:44762 dst: /127.0.0.1:43305 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:46,828 WARN [Thread-720] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39903,DS-ae29f8b7-6ef7-430d-b497-7d520e019952,DISK] 2023-05-27 22:57:46,837 INFO [Listener at localhost/42117] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,36195,1685228220316/jenkins-hbase4.apache.org%2C36195%2C1685228220316.1685228247773 with entries=2, filesize=1.57 KB; new WAL /user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,36195,1685228220316/jenkins-hbase4.apache.org%2C36195%2C1685228220316.1685228266824 2023-05-27 22:57:46,837 DEBUG [Listener at localhost/42117] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43305,DS-cd4dd9f2-9785-449d-b4b7-fd846ca22237,DISK], DatanodeInfoWithStorage[127.0.0.1:35315,DS-98cef918-d1e2-488d-95e5-d610a8772c97,DISK]] 2023-05-27 22:57:46,837 DEBUG [Listener at localhost/42117] wal.AbstractFSWAL(716): hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,36195,1685228220316/jenkins-hbase4.apache.org%2C36195%2C1685228220316.1685228247773 is not closed yet, will try archiving it next time 2023-05-27 22:57:46,842 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36195] regionserver.HRegion(9158): Flush requested on cc70acfd85b964308e922cfb097a3b0c 2023-05-27 22:57:46,843 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing cc70acfd85b964308e922cfb097a3b0c 1/1 column families, dataSize=10.50 KB heapSize=11.50 KB 2023-05-27 22:57:46,844 INFO [sync.0] wal.FSHLog(774): LowReplication-Roller was enabled. 
2023-05-27 22:57:46,849 WARN [Thread-728] hdfs.DataStreamer(1658): Abandoning BP-2128736743-172.31.14.131-1685228218376:blk_1073741865_1047 2023-05-27 22:57:46,849 WARN [Thread-728] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39903,DS-ae29f8b7-6ef7-430d-b497-7d520e019952,DISK] 2023-05-27 22:57:46,860 INFO [Listener at localhost/42117] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-27 22:57:46,860 INFO [Listener at localhost/42117] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-27 22:57:46,860 DEBUG [Listener at localhost/42117] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x632e67ea to 127.0.0.1:53199 2023-05-27 22:57:46,860 DEBUG [Listener at localhost/42117] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 22:57:46,861 DEBUG [Listener at localhost/42117] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-27 22:57:46,861 DEBUG [Listener at localhost/42117] util.JVMClusterUtil(257): Found active master hash=1036995216, stopped=false 2023-05-27 22:57:46,861 INFO [Listener at localhost/42117] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,44839,1685228219122 2023-05-27 22:57:46,863 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): regionserver:42231-0x1006edc6de00001, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-27 22:57:46,863 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-27 22:57:46,863 INFO [Listener at localhost/42117] procedure2.ProcedureExecutor(629): Stopping 2023-05-27 22:57:46,863 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): regionserver:36195-0x1006edc6de00005, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-27 22:57:46,863 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:57:46,864 DEBUG [Listener at localhost/42117] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3a2bb7ae to 127.0.0.1:53199 2023-05-27 22:57:46,864 DEBUG [Listener at localhost/42117] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 22:57:46,864 INFO [Listener at localhost/42117] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,42231,1685228219175' ***** 2023-05-27 22:57:46,864 INFO [Listener at localhost/42117] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-27 22:57:46,864 INFO [Listener at localhost/42117] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,36195,1685228220316' ***** 2023-05-27 22:57:46,864 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 22:57:46,864 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42231-0x1006edc6de00001, quorum=127.0.0.1:53199, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 
2023-05-27 22:57:46,864 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.50 KB at sequenceid=25 (bloomFilter=true), to=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/default/TestLogRolling-testLogRollOnDatanodeDeath/cc70acfd85b964308e922cfb097a3b0c/.tmp/info/d1f07f3934bc45f8924507b446fa26b6 2023-05-27 22:57:46,864 INFO [RS:0;jenkins-hbase4:42231] regionserver.HeapMemoryManager(220): Stopping 2023-05-27 22:57:46,864 INFO [Listener at localhost/42117] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-27 22:57:46,865 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-27 22:57:46,865 INFO [RS:0;jenkins-hbase4:42231] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-27 22:57:46,865 INFO [RS:0;jenkins-hbase4:42231] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-27 22:57:46,865 INFO [RS:0;jenkins-hbase4:42231] regionserver.HRegionServer(3303): Received CLOSE for 3c1df9bdde90309b097a8fb8043a5f38 2023-05-27 22:57:46,866 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36195-0x1006edc6de00005, quorum=127.0.0.1:53199, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 22:57:46,866 INFO [RS:0;jenkins-hbase4:42231] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42231,1685228219175 2023-05-27 22:57:46,866 INFO [RS:1;jenkins-hbase4:36195] regionserver.HeapMemoryManager(220): Stopping 2023-05-27 22:57:46,866 DEBUG [RS:0;jenkins-hbase4:42231] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x12046b34 to 127.0.0.1:53199 2023-05-27 22:57:46,867 DEBUG [RS:0;jenkins-hbase4:42231] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 22:57:46,867 INFO [RS:0;jenkins-hbase4:42231] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-27 22:57:46,867 INFO [RS:0;jenkins-hbase4:42231] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-27 22:57:46,867 INFO [RS:0;jenkins-hbase4:42231] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-27 22:57:46,867 INFO [RS:0;jenkins-hbase4:42231] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-27 22:57:46,867 INFO [RS:0;jenkins-hbase4:42231] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-05-27 22:57:46,867 DEBUG [RS:0;jenkins-hbase4:42231] regionserver.HRegionServer(1478): Online Regions={3c1df9bdde90309b097a8fb8043a5f38=hbase:namespace,,1685228219762.3c1df9bdde90309b097a8fb8043a5f38., 1588230740=hbase:meta,,1.1588230740} 2023-05-27 22:57:46,867 DEBUG [RS:0;jenkins-hbase4:42231] regionserver.HRegionServer(1504): Waiting on 1588230740, 3c1df9bdde90309b097a8fb8043a5f38 2023-05-27 22:57:46,875 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3c1df9bdde90309b097a8fb8043a5f38, disabling compactions & flushes 2023-05-27 22:57:46,875 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-27 22:57:46,875 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685228219762.3c1df9bdde90309b097a8fb8043a5f38. 
2023-05-27 22:57:46,875 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685228219762.3c1df9bdde90309b097a8fb8043a5f38. 2023-05-27 22:57:46,875 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-27 22:57:46,875 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685228219762.3c1df9bdde90309b097a8fb8043a5f38. after waiting 0 ms 2023-05-27 22:57:46,875 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-27 22:57:46,875 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685228219762.3c1df9bdde90309b097a8fb8043a5f38. 2023-05-27 22:57:46,875 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-27 22:57:46,876 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 3c1df9bdde90309b097a8fb8043a5f38 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-27 22:57:46,876 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-27 22:57:46,876 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.92 KB heapSize=5.45 KB 2023-05-27 22:57:46,876 WARN [RS:0;jenkins-hbase4:42231.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=7, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34645,DS-6a8213fd-cd62-4a90-81df-bf520a89a643,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:57:46,877 WARN [RS_OPEN_META-regionserver/jenkins-hbase4:0-0.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=15, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34645,DS-6a8213fd-cd62-4a90-81df-bf520a89a643,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:57:46,877 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3c1df9bdde90309b097a8fb8043a5f38: 2023-05-27 22:57:46,877 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C42231%2C1685228219175:(num 1685228219565) roll requested 2023-05-27 22:57:46,878 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase4.apache.org,42231,1685228219175: Unrecoverable exception while closing hbase:namespace,,1685228219762.3c1df9bdde90309b097a8fb8043a5f38. ***** org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34645,DS-6a8213fd-cd62-4a90-81df-bf520a89a643,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:57:46,878 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-27 22:57:46,878 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2023-05-27 22:57:46,878 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. 
Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-05-27 22:57:46,883 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for java.lang:type=Memory 2023-05-27 22:57:46,886 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC 2023-05-27 22:57:46,887 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication 2023-05-27 22:57:46,887 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server 2023-05-27 22:57:46,887 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "Verbose": false, "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 1090519040, "init": 513802240, "max": 2051014656, "used": 669930720 }, "NonHeapMemoryUsage": { "committed": 133062656, "init": 2555904, "max": -1, "used": 130542200 }, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] } 2023-05-27 22:57:46,889 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_270591704_17 at /127.0.0.1:44782 [Receiving block BP-2128736743-172.31.14.131-1685228218376:blk_1073741867_1049]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/cluster_24fa0eba-0341-662b-7c3a-73a0de2833a3/dfs/data/data3/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/cluster_24fa0eba-0341-662b-7c3a-73a0de2833a3/dfs/data/data4/current]'}, localName='127.0.0.1:43305', datanodeUuid='ad6b7259-200f-4fc4-8e40-263bca51be38', xmitsInProgress=0}:Exception transfering block BP-2128736743-172.31.14.131-1685228218376:blk_1073741867_1049 to mirror 127.0.0.1:39903: java.net.ConnectException: Connection refused 2023-05-27 22:57:46,889 WARN [Thread-735] hdfs.DataStreamer(1658): Abandoning BP-2128736743-172.31.14.131-1685228218376:blk_1073741867_1049 2023-05-27 22:57:46,890 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_270591704_17 at /127.0.0.1:44782 [Receiving block BP-2128736743-172.31.14.131-1685228218376:blk_1073741867_1049]] datanode.DataXceiver(323): 127.0.0.1:43305:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:44782 dst: /127.0.0.1:43305 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:46,890 DEBUG [MemStoreFlusher.0] 
regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/default/TestLogRolling-testLogRollOnDatanodeDeath/cc70acfd85b964308e922cfb097a3b0c/.tmp/info/d1f07f3934bc45f8924507b446fa26b6 as hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/default/TestLogRolling-testLogRollOnDatanodeDeath/cc70acfd85b964308e922cfb097a3b0c/info/d1f07f3934bc45f8924507b446fa26b6 2023-05-27 22:57:46,890 WARN [Thread-735] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39903,DS-ae29f8b7-6ef7-430d-b497-7d520e019952,DISK] 2023-05-27 22:57:46,894 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44839] master.MasterRpcServices(609): jenkins-hbase4.apache.org,42231,1685228219175 reported a fatal error: ***** ABORTING region server jenkins-hbase4.apache.org,42231,1685228219175: Unrecoverable exception while closing hbase:namespace,,1685228219762.3c1df9bdde90309b097a8fb8043a5f38. ***** Cause: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34645,DS-6a8213fd-cd62-4a90-81df-bf520a89a643,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:57:46,905 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL 2023-05-27 22:57:46,906 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,42231,1685228219175/jenkins-hbase4.apache.org%2C42231%2C1685228219175.1685228219565 with entries=3, filesize=600 B; new WAL /user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,42231,1685228219175/jenkins-hbase4.apache.org%2C42231%2C1685228219175.1685228266877 2023-05-27 22:57:46,907 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35315,DS-98cef918-d1e2-488d-95e5-d610a8772c97,DISK], DatanodeInfoWithStorage[127.0.0.1:43305,DS-cd4dd9f2-9785-449d-b4b7-fd846ca22237,DISK]] 2023-05-27 22:57:46,907 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34645,DS-6a8213fd-cd62-4a90-81df-bf520a89a643,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:57:46,907 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,42231,1685228219175/jenkins-hbase4.apache.org%2C42231%2C1685228219175.1685228219565 is not closed yet, will try archiving it next time 2023-05-27 22:57:46,908 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,42231,1685228219175/jenkins-hbase4.apache.org%2C42231%2C1685228219175.1685228219565; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34645,DS-6a8213fd-cd62-4a90-81df-bf520a89a643,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:57:46,908 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C42231%2C1685228219175.meta:.meta(num 1685228219705) roll requested 2023-05-27 22:57:46,908 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/default/TestLogRolling-testLogRollOnDatanodeDeath/cc70acfd85b964308e922cfb097a3b0c/info/d1f07f3934bc45f8924507b446fa26b6, entries=8, sequenceid=25, filesize=13.2 K 2023-05-27 22:57:46,910 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~10.50 KB/10757, heapSize ~11.48 KB/11760, currentSize=9.46 KB/9684 for cc70acfd85b964308e922cfb097a3b0c in 67ms, sequenceid=25, compaction requested=false 2023-05-27 22:57:46,910 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for cc70acfd85b964308e922cfb097a3b0c: 2023-05-27 22:57:46,910 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=23.2 K, sizeToCheck=16.0 K 2023-05-27 22:57:46,910 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-27 22:57:46,910 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/default/TestLogRolling-testLogRollOnDatanodeDeath/cc70acfd85b964308e922cfb097a3b0c/info/d1f07f3934bc45f8924507b446fa26b6 because midkey is the same as first or last row 2023-05-27 22:57:46,910 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-27 22:57:46,910 INFO [RS:1;jenkins-hbase4:36195] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-27 22:57:46,910 INFO [RS:1;jenkins-hbase4:36195] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-05-27 22:57:46,910 INFO [RS:1;jenkins-hbase4:36195] regionserver.HRegionServer(3303): Received CLOSE for cc70acfd85b964308e922cfb097a3b0c 2023-05-27 22:57:46,911 INFO [RS:1;jenkins-hbase4:36195] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36195,1685228220316 2023-05-27 22:57:46,911 DEBUG [RS:1;jenkins-hbase4:36195] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x70967ca0 to 127.0.0.1:53199 2023-05-27 22:57:46,911 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cc70acfd85b964308e922cfb097a3b0c, disabling compactions & flushes 2023-05-27 22:57:46,911 DEBUG [RS:1;jenkins-hbase4:36195] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 22:57:46,911 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1685228220428.cc70acfd85b964308e922cfb097a3b0c. 2023-05-27 22:57:46,911 INFO [RS:1;jenkins-hbase4:36195] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-05-27 22:57:46,911 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685228220428.cc70acfd85b964308e922cfb097a3b0c. 2023-05-27 22:57:46,911 DEBUG [RS:1;jenkins-hbase4:36195] regionserver.HRegionServer(1478): Online Regions={cc70acfd85b964308e922cfb097a3b0c=TestLogRolling-testLogRollOnDatanodeDeath,,1685228220428.cc70acfd85b964308e922cfb097a3b0c.} 2023-05-27 22:57:46,911 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685228220428.cc70acfd85b964308e922cfb097a3b0c. after waiting 0 ms 2023-05-27 22:57:46,911 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1685228220428.cc70acfd85b964308e922cfb097a3b0c. 
2023-05-27 22:57:46,911 DEBUG [RS:1;jenkins-hbase4:36195] regionserver.HRegionServer(1504): Waiting on cc70acfd85b964308e922cfb097a3b0c 2023-05-27 22:57:46,911 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing cc70acfd85b964308e922cfb097a3b0c 1/1 column families, dataSize=9.46 KB heapSize=10.38 KB 2023-05-27 22:57:46,919 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL 2023-05-27 22:57:46,919 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,42231,1685228219175/jenkins-hbase4.apache.org%2C42231%2C1685228219175.meta.1685228219705.meta with entries=11, filesize=3.69 KB; new WAL /user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,42231,1685228219175/jenkins-hbase4.apache.org%2C42231%2C1685228219175.meta.1685228266908.meta 2023-05-27 22:57:46,921 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35315,DS-98cef918-d1e2-488d-95e5-d610a8772c97,DISK], DatanodeInfoWithStorage[127.0.0.1:43305,DS-cd4dd9f2-9785-449d-b4b7-fd846ca22237,DISK]] 2023-05-27 22:57:46,921 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34645,DS-6a8213fd-cd62-4a90-81df-bf520a89a643,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:57:46,921 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,42231,1685228219175/jenkins-hbase4.apache.org%2C42231%2C1685228219175.meta.1685228219705.meta is not closed yet, will try archiving it next time 2023-05-27 22:57:46,921 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,42231,1685228219175/jenkins-hbase4.apache.org%2C42231%2C1685228219175.meta.1685228219705.meta; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34645,DS-6a8213fd-cd62-4a90-81df-bf520a89a643,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:57:46,934 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=9.46 KB at sequenceid=37 (bloomFilter=true), to=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/default/TestLogRolling-testLogRollOnDatanodeDeath/cc70acfd85b964308e922cfb097a3b0c/.tmp/info/3ef8462c1c194c0fa61cd8e717d98bc8 2023-05-27 22:57:46,941 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/default/TestLogRolling-testLogRollOnDatanodeDeath/cc70acfd85b964308e922cfb097a3b0c/.tmp/info/3ef8462c1c194c0fa61cd8e717d98bc8 as hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/default/TestLogRolling-testLogRollOnDatanodeDeath/cc70acfd85b964308e922cfb097a3b0c/info/3ef8462c1c194c0fa61cd8e717d98bc8 2023-05-27 22:57:46,948 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/default/TestLogRolling-testLogRollOnDatanodeDeath/cc70acfd85b964308e922cfb097a3b0c/info/3ef8462c1c194c0fa61cd8e717d98bc8, entries=9, sequenceid=37, filesize=14.2 K 2023-05-27 22:57:46,949 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~9.46 KB/9684, heapSize ~10.36 KB/10608, currentSize=0 B/0 for cc70acfd85b964308e922cfb097a3b0c in 38ms, sequenceid=37, compaction requested=true 2023-05-27 22:57:46,957 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/data/default/TestLogRolling-testLogRollOnDatanodeDeath/cc70acfd85b964308e922cfb097a3b0c/recovered.edits/40.seqid, newMaxSeqId=40, maxSeqId=1 2023-05-27 22:57:46,958 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1685228220428.cc70acfd85b964308e922cfb097a3b0c. 2023-05-27 22:57:46,958 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cc70acfd85b964308e922cfb097a3b0c: 2023-05-27 22:57:46,958 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1685228220428.cc70acfd85b964308e922cfb097a3b0c. 
2023-05-27 22:57:47,068 INFO [RS:0;jenkins-hbase4:42231] regionserver.HRegionServer(3303): Received CLOSE for 3c1df9bdde90309b097a8fb8043a5f38 2023-05-27 22:57:47,068 INFO [RS:0;jenkins-hbase4:42231] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-27 22:57:47,068 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3c1df9bdde90309b097a8fb8043a5f38, disabling compactions & flushes 2023-05-27 22:57:47,068 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-27 22:57:47,068 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685228219762.3c1df9bdde90309b097a8fb8043a5f38. 2023-05-27 22:57:47,068 DEBUG [RS:0;jenkins-hbase4:42231] regionserver.HRegionServer(1504): Waiting on 1588230740, 3c1df9bdde90309b097a8fb8043a5f38 2023-05-27 22:57:47,068 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685228219762.3c1df9bdde90309b097a8fb8043a5f38. 2023-05-27 22:57:47,068 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-27 22:57:47,068 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685228219762.3c1df9bdde90309b097a8fb8043a5f38. after waiting 0 ms 2023-05-27 22:57:47,068 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-27 22:57:47,068 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685228219762.3c1df9bdde90309b097a8fb8043a5f38. 2023-05-27 22:57:47,068 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-27 22:57:47,069 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-27 22:57:47,069 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3c1df9bdde90309b097a8fb8043a5f38: 2023-05-27 22:57:47,069 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-27 22:57:47,069 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-05-27 22:57:47,069 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1685228219762.3c1df9bdde90309b097a8fb8043a5f38. 2023-05-27 22:57:47,112 INFO [RS:1;jenkins-hbase4:36195] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36195,1685228220316; all regions closed. 
2023-05-27 22:57:47,112 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,36195,1685228220316 2023-05-27 22:57:47,122 DEBUG [RS:1;jenkins-hbase4:36195] wal.AbstractFSWAL(1028): Moved 4 WAL file(s) to /user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/oldWALs 2023-05-27 22:57:47,122 INFO [RS:1;jenkins-hbase4:36195] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C36195%2C1685228220316:(num 1685228266824) 2023-05-27 22:57:47,122 DEBUG [RS:1;jenkins-hbase4:36195] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 22:57:47,122 INFO [RS:1;jenkins-hbase4:36195] regionserver.LeaseManager(133): Closed leases 2023-05-27 22:57:47,122 INFO [RS:1;jenkins-hbase4:36195] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-05-27 22:57:47,122 INFO [RS:1;jenkins-hbase4:36195] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-27 22:57:47,122 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-27 22:57:47,122 INFO [RS:1;jenkins-hbase4:36195] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-27 22:57:47,123 INFO [RS:1;jenkins-hbase4:36195] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-27 22:57:47,123 INFO [RS:1;jenkins-hbase4:36195] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36195 2023-05-27 22:57:47,126 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 22:57:47,126 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): regionserver:36195-0x1006edc6de00005, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36195,1685228220316 2023-05-27 22:57:47,126 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): regionserver:42231-0x1006edc6de00001, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36195,1685228220316 2023-05-27 22:57:47,126 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): regionserver:36195-0x1006edc6de00005, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 22:57:47,126 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): regionserver:42231-0x1006edc6de00001, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 22:57:47,128 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,36195,1685228220316] 2023-05-27 22:57:47,128 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,36195,1685228220316; numProcessing=1 2023-05-27 22:57:47,130 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node 
/hbase/draining/jenkins-hbase4.apache.org,36195,1685228220316 already deleted, retry=false 2023-05-27 22:57:47,130 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,36195,1685228220316 expired; onlineServers=1 2023-05-27 22:57:47,263 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): regionserver:36195-0x1006edc6de00005, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 22:57:47,263 INFO [RS:1;jenkins-hbase4:36195] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36195,1685228220316; zookeeper connection closed. 2023-05-27 22:57:47,263 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): regionserver:36195-0x1006edc6de00005, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 22:57:47,264 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@70bbffc1] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@70bbffc1 2023-05-27 22:57:47,268 INFO [RS:0;jenkins-hbase4:42231] regionserver.HRegionServer(1499): We were exiting though online regions are not empty, because some regions failed closing 2023-05-27 22:57:47,268 INFO [RS:0;jenkins-hbase4:42231] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42231,1685228219175; all regions closed. 2023-05-27 22:57:47,269 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,42231,1685228219175 2023-05-27 22:57:47,274 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/WALs/jenkins-hbase4.apache.org,42231,1685228219175 2023-05-27 22:57:47,278 DEBUG [RS:0;jenkins-hbase4:42231] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 22:57:47,278 INFO [RS:0;jenkins-hbase4:42231] regionserver.LeaseManager(133): Closed leases 2023-05-27 22:57:47,278 INFO [RS:0;jenkins-hbase4:42231] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-27 22:57:47,278 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-27 22:57:47,279 INFO [RS:0;jenkins-hbase4:42231] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42231 2023-05-27 22:57:47,281 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): regionserver:42231-0x1006edc6de00001, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42231,1685228219175 2023-05-27 22:57:47,281 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 22:57:47,282 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,42231,1685228219175] 2023-05-27 22:57:47,282 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42231,1685228219175; numProcessing=2 2023-05-27 22:57:47,283 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42231,1685228219175 already deleted, retry=false 2023-05-27 22:57:47,283 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,42231,1685228219175 expired; onlineServers=0 2023-05-27 22:57:47,283 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,44839,1685228219122' ***** 2023-05-27 22:57:47,283 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-27 22:57:47,284 DEBUG [M:0;jenkins-hbase4:44839] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@39f0d9f1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-27 22:57:47,284 INFO [M:0;jenkins-hbase4:44839] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44839,1685228219122 2023-05-27 22:57:47,284 INFO [M:0;jenkins-hbase4:44839] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44839,1685228219122; all regions closed. 2023-05-27 22:57:47,284 DEBUG [M:0;jenkins-hbase4:44839] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 22:57:47,284 DEBUG [M:0;jenkins-hbase4:44839] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-27 22:57:47,284 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-05-27 22:57:47,284 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685228219336] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685228219336,5,FailOnTimeoutGroup] 2023-05-27 22:57:47,284 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685228219336] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685228219336,5,FailOnTimeoutGroup] 2023-05-27 22:57:47,284 DEBUG [M:0;jenkins-hbase4:44839] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-27 22:57:47,285 INFO [M:0;jenkins-hbase4:44839] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 
2023-05-27 22:57:47,285 INFO [M:0;jenkins-hbase4:44839] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-27 22:57:47,285 INFO [M:0;jenkins-hbase4:44839] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-05-27 22:57:47,285 DEBUG [M:0;jenkins-hbase4:44839] master.HMaster(1512): Stopping service threads 2023-05-27 22:57:47,286 INFO [M:0;jenkins-hbase4:44839] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-27 22:57:47,286 ERROR [M:0;jenkins-hbase4:44839] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-27 22:57:47,286 INFO [M:0;jenkins-hbase4:44839] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-27 22:57:47,286 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-05-27 22:57:47,291 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-27 22:57:47,291 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:57:47,291 DEBUG [M:0;jenkins-hbase4:44839] zookeeper.ZKUtil(398): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-27 22:57:47,291 WARN [M:0;jenkins-hbase4:44839] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-27 22:57:47,291 INFO [M:0;jenkins-hbase4:44839] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-27 22:57:47,291 INFO [M:0;jenkins-hbase4:44839] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-27 22:57:47,292 DEBUG [M:0;jenkins-hbase4:44839] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-27 22:57:47,292 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 22:57:47,292 INFO [M:0;jenkins-hbase4:44839] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 22:57:47,292 DEBUG [M:0;jenkins-hbase4:44839] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 22:57:47,292 DEBUG [M:0;jenkins-hbase4:44839] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-27 22:57:47,292 DEBUG [M:0;jenkins-hbase4:44839] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-27 22:57:47,292 INFO [M:0;jenkins-hbase4:44839] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.08 KB heapSize=45.73 KB 2023-05-27 22:57:47,307 INFO [M:0;jenkins-hbase4:44839] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.08 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/b526721bf81c4fa7b95c1b07f3446dca 2023-05-27 22:57:47,313 DEBUG [M:0;jenkins-hbase4:44839] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/b526721bf81c4fa7b95c1b07f3446dca as hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/b526721bf81c4fa7b95c1b07f3446dca 2023-05-27 22:57:47,319 INFO [M:0;jenkins-hbase4:44839] regionserver.HStore(1080): Added hdfs://localhost:44813/user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/b526721bf81c4fa7b95c1b07f3446dca, entries=11, sequenceid=92, filesize=7.0 K 2023-05-27 22:57:47,320 INFO [M:0;jenkins-hbase4:44839] regionserver.HRegion(2948): Finished flush of dataSize ~38.08 KB/38997, heapSize ~45.72 KB/46816, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 28ms, sequenceid=92, compaction requested=false 2023-05-27 22:57:47,321 INFO [M:0;jenkins-hbase4:44839] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 22:57:47,321 DEBUG [M:0;jenkins-hbase4:44839] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 22:57:47,322 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/20544278-e6b3-6224-8d11-7d05a35821bd/MasterData/WALs/jenkins-hbase4.apache.org,44839,1685228219122 2023-05-27 22:57:47,325 INFO [M:0;jenkins-hbase4:44839] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-27 22:57:47,325 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-27 22:57:47,325 INFO [M:0;jenkins-hbase4:44839] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44839 2023-05-27 22:57:47,327 DEBUG [M:0;jenkins-hbase4:44839] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,44839,1685228219122 already deleted, retry=false 2023-05-27 22:57:47,382 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): regionserver:42231-0x1006edc6de00001, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 22:57:47,382 INFO [RS:0;jenkins-hbase4:42231] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42231,1685228219175; zookeeper connection closed. 
2023-05-27 22:57:47,383 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): regionserver:42231-0x1006edc6de00001, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 22:57:47,383 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@320ed944] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@320ed944 2023-05-27 22:57:47,384 INFO [Listener at localhost/42117] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 2 regionserver(s) complete 2023-05-27 22:57:47,385 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@1b655bb3] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:43305, datanodeUuid=ad6b7259-200f-4fc4-8e40-263bca51be38, infoPort=45857, infoSecurePort=0, ipcPort=42117, storageInfo=lv=-57;cid=testClusterID;nsid=1609750335;c=1685228218376):Failed to transfer BP-2128736743-172.31.14.131-1685228218376:blk_1073741825_1001 to 127.0.0.1:39903 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:57:47,432 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-27 22:57:47,483 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 22:57:47,483 INFO [M:0;jenkins-hbase4:44839] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44839,1685228219122; zookeeper connection closed. 
2023-05-27 22:57:47,483 DEBUG [Listener at localhost/38643-EventThread] zookeeper.ZKWatcher(600): master:44839-0x1006edc6de00000, quorum=127.0.0.1:53199, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 22:57:47,484 WARN [Listener at localhost/42117] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 22:57:47,488 INFO [Listener at localhost/42117] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 22:57:47,591 WARN [BP-2128736743-172.31.14.131-1685228218376 heartbeating to localhost/127.0.0.1:44813] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-27 22:57:47,591 WARN [BP-2128736743-172.31.14.131-1685228218376 heartbeating to localhost/127.0.0.1:44813] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-2128736743-172.31.14.131-1685228218376 (Datanode Uuid ad6b7259-200f-4fc4-8e40-263bca51be38) service to localhost/127.0.0.1:44813 2023-05-27 22:57:47,592 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/cluster_24fa0eba-0341-662b-7c3a-73a0de2833a3/dfs/data/data3/current/BP-2128736743-172.31.14.131-1685228218376] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 22:57:47,592 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/cluster_24fa0eba-0341-662b-7c3a-73a0de2833a3/dfs/data/data4/current/BP-2128736743-172.31.14.131-1685228218376] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 22:57:47,594 WARN [Listener at localhost/42117] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 22:57:47,656 INFO [Listener at localhost/42117] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 22:57:47,759 WARN [BP-2128736743-172.31.14.131-1685228218376 heartbeating to localhost/127.0.0.1:44813] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-27 22:57:47,759 WARN [BP-2128736743-172.31.14.131-1685228218376 heartbeating to localhost/127.0.0.1:44813] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-2128736743-172.31.14.131-1685228218376 (Datanode Uuid 3e81fd00-b6ae-4479-aead-39ca28fb7a72) service to localhost/127.0.0.1:44813 2023-05-27 22:57:47,760 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/cluster_24fa0eba-0341-662b-7c3a-73a0de2833a3/dfs/data/data9/current/BP-2128736743-172.31.14.131-1685228218376] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 22:57:47,760 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/cluster_24fa0eba-0341-662b-7c3a-73a0de2833a3/dfs/data/data10/current/BP-2128736743-172.31.14.131-1685228218376] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 22:57:47,772 INFO [Listener at localhost/42117] log.Slf4jLog(67): Stopped 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 22:57:47,894 INFO [Listener at localhost/42117] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-27 22:57:47,929 INFO [Listener at localhost/42117] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-27 22:57:47,940 INFO [Listener at localhost/42117] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=78 (was 52) Potentially hanging thread: LeaseRenewer:jenkins.hfs.2@localhost:44813 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-17-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-17-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: nioEventLoopGroup-16-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-17-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.1@localhost:44813 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-6-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native 
Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RS-EventLoopGroup-6-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1121440679) connection to localhost/127.0.0.1:44813 from jenkins.hfs.1 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-6-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-15-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-15-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-15-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42117 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: nioEventLoopGroup-14-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1121440679) connection to localhost/127.0.0.1:44813 from jenkins.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Abort regionserver monitor java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: LeaseRenewer:jenkins@localhost:44813 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Client (1121440679) connection to localhost/127.0.0.1:44813 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Client (1121440679) connection to localhost/127.0.0.1:44813 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=469 (was 442) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=64 (was 60) - SystemLoadAverage LEAK? 
-, ProcessCount=168 (was 169), AvailableMemoryMB=3921 (was 4452) 2023-05-27 22:57:47,948 INFO [Listener at localhost/42117] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=78, OpenFileDescriptor=469, MaxFileDescriptor=60000, SystemLoadAverage=64, ProcessCount=168, AvailableMemoryMB=3920 2023-05-27 22:57:47,949 INFO [Listener at localhost/42117] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-27 22:57:47,949 INFO [Listener at localhost/42117] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/hadoop.log.dir so I do NOT create it in target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a 2023-05-27 22:57:47,949 INFO [Listener at localhost/42117] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/c51b45fd-50c1-1e85-13c0-ac88a02e9aab/hadoop.tmp.dir so I do NOT create it in target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a 2023-05-27 22:57:47,949 INFO [Listener at localhost/42117] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/cluster_345137e2-8252-2640-564c-1a1605c278f1, deleteOnExit=true 2023-05-27 22:57:47,949 INFO [Listener at localhost/42117] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-27 22:57:47,949 INFO [Listener at localhost/42117] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/test.cache.data in system properties and HBase conf 2023-05-27 22:57:47,950 INFO [Listener at localhost/42117] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/hadoop.tmp.dir in system properties and HBase conf 2023-05-27 22:57:47,950 INFO [Listener at localhost/42117] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/hadoop.log.dir in system properties and HBase conf 2023-05-27 22:57:47,950 INFO [Listener at localhost/42117] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-27 22:57:47,950 INFO [Listener at localhost/42117] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-27 22:57:47,950 INFO [Listener at localhost/42117] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-27 22:57:47,950 DEBUG 
[Listener at localhost/42117] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-27 22:57:47,951 INFO [Listener at localhost/42117] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-27 22:57:47,951 INFO [Listener at localhost/42117] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-27 22:57:47,951 INFO [Listener at localhost/42117] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-27 22:57:47,951 INFO [Listener at localhost/42117] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-27 22:57:47,952 INFO [Listener at localhost/42117] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-27 22:57:47,952 INFO [Listener at localhost/42117] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-27 22:57:47,952 INFO [Listener at localhost/42117] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-27 22:57:47,952 INFO [Listener at localhost/42117] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-27 22:57:47,952 INFO [Listener at localhost/42117] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-27 22:57:47,953 INFO [Listener at localhost/42117] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/nfs.dump.dir in system properties 
and HBase conf 2023-05-27 22:57:47,953 INFO [Listener at localhost/42117] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/java.io.tmpdir in system properties and HBase conf 2023-05-27 22:57:47,953 INFO [Listener at localhost/42117] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-27 22:57:47,953 INFO [Listener at localhost/42117] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-27 22:57:47,953 INFO [Listener at localhost/42117] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-27 22:57:47,955 WARN [Listener at localhost/42117] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-27 22:57:47,958 WARN [Listener at localhost/42117] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-27 22:57:47,958 WARN [Listener at localhost/42117] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-27 22:57:48,010 WARN [Listener at localhost/42117] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 22:57:48,013 INFO [Listener at localhost/42117] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 22:57:48,018 INFO [Listener at localhost/42117] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/java.io.tmpdir/Jetty_localhost_37631_hdfs____.bjfbc6/webapp 2023-05-27 22:57:48,108 INFO [Listener at localhost/42117] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37631 2023-05-27 22:57:48,109 WARN [Listener at localhost/42117] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-27 22:57:48,112 WARN [Listener at localhost/42117] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-27 22:57:48,112 WARN [Listener at localhost/42117] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-27 22:57:48,149 WARN [Listener at localhost/44907] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 22:57:48,162 WARN [Listener at localhost/44907] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 22:57:48,165 WARN [Listener at localhost/44907] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 22:57:48,166 INFO [Listener at localhost/44907] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 22:57:48,170 INFO [Listener at localhost/44907] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/java.io.tmpdir/Jetty_localhost_37115_datanode____.505s2w/webapp 2023-05-27 22:57:48,263 INFO [Listener at localhost/44907] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37115 2023-05-27 22:57:48,270 WARN [Listener at localhost/37331] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 22:57:48,285 WARN [Listener at localhost/37331] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 22:57:48,290 WARN [Listener at localhost/37331] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 22:57:48,291 INFO [Listener at localhost/37331] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 22:57:48,295 INFO [Listener at localhost/37331] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/java.io.tmpdir/Jetty_localhost_39559_datanode____ozfd96/webapp 2023-05-27 22:57:48,370 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xefc06d1e187342ee: Processing first storage report for DS-ae029a28-be7d-4f56-bbda-9b0db11642c3 from datanode 76e4b823-c9a4-4983-84bf-c3e22be4be22 2023-05-27 22:57:48,370 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xefc06d1e187342ee: from storage DS-ae029a28-be7d-4f56-bbda-9b0db11642c3 node DatanodeRegistration(127.0.0.1:39873, datanodeUuid=76e4b823-c9a4-4983-84bf-c3e22be4be22, infoPort=46539, infoSecurePort=0, ipcPort=37331, storageInfo=lv=-57;cid=testClusterID;nsid=1059905169;c=1685228267961), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 22:57:48,370 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xefc06d1e187342ee: Processing first storage report for DS-ed590818-c1d5-44e3-b8b1-acc70066224b from datanode 76e4b823-c9a4-4983-84bf-c3e22be4be22 2023-05-27 22:57:48,370 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* 
processReport 0xefc06d1e187342ee: from storage DS-ed590818-c1d5-44e3-b8b1-acc70066224b node DatanodeRegistration(127.0.0.1:39873, datanodeUuid=76e4b823-c9a4-4983-84bf-c3e22be4be22, infoPort=46539, infoSecurePort=0, ipcPort=37331, storageInfo=lv=-57;cid=testClusterID;nsid=1059905169;c=1685228267961), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 22:57:48,395 INFO [Listener at localhost/37331] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39559 2023-05-27 22:57:48,401 WARN [Listener at localhost/46401] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 22:57:48,401 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-27 22:57:48,488 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x136da627e63595d1: Processing first storage report for DS-55391d60-c599-41fe-9e4d-6bc824ee8bf5 from datanode e2024e87-a5d0-4488-9696-4dabd1ce4654 2023-05-27 22:57:48,488 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x136da627e63595d1: from storage DS-55391d60-c599-41fe-9e4d-6bc824ee8bf5 node DatanodeRegistration(127.0.0.1:45375, datanodeUuid=e2024e87-a5d0-4488-9696-4dabd1ce4654, infoPort=44165, infoSecurePort=0, ipcPort=46401, storageInfo=lv=-57;cid=testClusterID;nsid=1059905169;c=1685228267961), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 22:57:48,488 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x136da627e63595d1: Processing first storage report for DS-f9d7bdaa-0968-4fe6-a521-44ee60035c12 from datanode e2024e87-a5d0-4488-9696-4dabd1ce4654 2023-05-27 22:57:48,488 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x136da627e63595d1: from storage DS-f9d7bdaa-0968-4fe6-a521-44ee60035c12 node DatanodeRegistration(127.0.0.1:45375, datanodeUuid=e2024e87-a5d0-4488-9696-4dabd1ce4654, infoPort=44165, infoSecurePort=0, ipcPort=46401, storageInfo=lv=-57;cid=testClusterID;nsid=1059905169;c=1685228267961), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 22:57:48,509 DEBUG [Listener at localhost/46401] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a 2023-05-27 22:57:48,511 INFO [Listener at localhost/46401] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/cluster_345137e2-8252-2640-564c-1a1605c278f1/zookeeper_0, clientPort=54282, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/cluster_345137e2-8252-2640-564c-1a1605c278f1/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/cluster_345137e2-8252-2640-564c-1a1605c278f1/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-27 22:57:48,512 INFO [Listener at localhost/46401] zookeeper.MiniZooKeeperCluster(283): Started 
MiniZooKeeperCluster and ran 'stat' on client port=54282 2023-05-27 22:57:48,512 INFO [Listener at localhost/46401] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:57:48,513 INFO [Listener at localhost/46401] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:57:48,526 INFO [Listener at localhost/46401] util.FSUtils(471): Created version file at hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b with version=8 2023-05-27 22:57:48,526 INFO [Listener at localhost/46401] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/hbase-staging 2023-05-27 22:57:48,527 INFO [Listener at localhost/46401] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-27 22:57:48,527 INFO [Listener at localhost/46401] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 22:57:48,528 INFO [Listener at localhost/46401] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-27 22:57:48,528 INFO [Listener at localhost/46401] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-27 22:57:48,528 INFO [Listener at localhost/46401] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 22:57:48,528 INFO [Listener at localhost/46401] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-27 22:57:48,528 INFO [Listener at localhost/46401] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-27 22:57:48,529 INFO [Listener at localhost/46401] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41601 2023-05-27 22:57:48,529 INFO [Listener at localhost/46401] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:57:48,530 INFO [Listener at localhost/46401] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:57:48,531 INFO [Listener at localhost/46401] zookeeper.RecoverableZooKeeper(93): Process identifier=master:41601 connecting to ZooKeeper ensemble=127.0.0.1:54282 2023-05-27 22:57:48,539 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): master:416010x0, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-27 22:57:48,540 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKWatcher(625): master:41601-0x1006edd2ee10000 connected 2023-05-27 22:57:48,554 DEBUG [Listener at localhost/46401] zookeeper.ZKUtil(164): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 22:57:48,554 DEBUG [Listener at localhost/46401] zookeeper.ZKUtil(164): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 22:57:48,555 DEBUG [Listener at localhost/46401] zookeeper.ZKUtil(164): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-27 22:57:48,555 DEBUG [Listener at localhost/46401] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41601 2023-05-27 22:57:48,555 DEBUG [Listener at localhost/46401] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41601 2023-05-27 22:57:48,555 DEBUG [Listener at localhost/46401] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41601 2023-05-27 22:57:48,556 DEBUG [Listener at localhost/46401] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41601 2023-05-27 22:57:48,556 DEBUG [Listener at localhost/46401] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41601 2023-05-27 22:57:48,556 INFO [Listener at localhost/46401] master.HMaster(444): hbase.rootdir=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b, hbase.cluster.distributed=false 2023-05-27 22:57:48,568 INFO [Listener at localhost/46401] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-27 22:57:48,568 INFO [Listener at localhost/46401] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 22:57:48,569 INFO [Listener at localhost/46401] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-27 22:57:48,569 INFO [Listener at localhost/46401] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-27 22:57:48,569 INFO [Listener at localhost/46401] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 22:57:48,569 INFO [Listener at localhost/46401] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-27 22:57:48,569 INFO [Listener at localhost/46401] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-27 22:57:48,570 INFO [Listener at localhost/46401] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34323 2023-05-27 22:57:48,570 INFO [Listener at localhost/46401] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-27 22:57:48,571 DEBUG [Listener at 
localhost/46401] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-27 22:57:48,571 INFO [Listener at localhost/46401] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:57:48,572 INFO [Listener at localhost/46401] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:57:48,573 INFO [Listener at localhost/46401] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34323 connecting to ZooKeeper ensemble=127.0.0.1:54282 2023-05-27 22:57:48,575 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): regionserver:343230x0, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-27 22:57:48,577 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34323-0x1006edd2ee10001 connected 2023-05-27 22:57:48,577 DEBUG [Listener at localhost/46401] zookeeper.ZKUtil(164): regionserver:34323-0x1006edd2ee10001, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 22:57:48,577 DEBUG [Listener at localhost/46401] zookeeper.ZKUtil(164): regionserver:34323-0x1006edd2ee10001, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 22:57:48,577 DEBUG [Listener at localhost/46401] zookeeper.ZKUtil(164): regionserver:34323-0x1006edd2ee10001, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-27 22:57:48,578 DEBUG [Listener at localhost/46401] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34323 2023-05-27 22:57:48,578 DEBUG [Listener at localhost/46401] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34323 2023-05-27 22:57:48,578 DEBUG [Listener at localhost/46401] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34323 2023-05-27 22:57:48,579 DEBUG [Listener at localhost/46401] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34323 2023-05-27 22:57:48,579 DEBUG [Listener at localhost/46401] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34323 2023-05-27 22:57:48,580 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,41601,1685228268527 2023-05-27 22:57:48,581 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-27 22:57:48,581 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,41601,1685228268527 2023-05-27 22:57:48,583 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, 
baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-27 22:57:48,583 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:57:48,583 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): regionserver:34323-0x1006edd2ee10001, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-27 22:57:48,583 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-27 22:57:48,584 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,41601,1685228268527 from backup master directory 2023-05-27 22:57:48,584 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-27 22:57:48,586 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,41601,1685228268527 2023-05-27 22:57:48,586 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-27 22:57:48,586 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-27 22:57:48,586 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,41601,1685228268527 2023-05-27 22:57:48,598 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/hbase.id with ID: 5b4ab86d-8bf6-455f-b7a5-f63fbf02ebdb 2023-05-27 22:57:48,609 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:57:48,611 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:57:48,620 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x61a6a9f8 to 127.0.0.1:54282 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 22:57:48,623 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@752de7c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind 
address=null 2023-05-27 22:57:48,623 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-27 22:57:48,624 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-27 22:57:48,624 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 22:57:48,625 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/MasterData/data/master/store-tmp 2023-05-27 22:57:48,635 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:57:48,635 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-27 22:57:48,635 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 22:57:48,635 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 22:57:48,635 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-27 22:57:48,636 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 22:57:48,636 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-27 22:57:48,636 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 22:57:48,636 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/MasterData/WALs/jenkins-hbase4.apache.org,41601,1685228268527 2023-05-27 22:57:48,639 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41601%2C1685228268527, suffix=, logDir=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/MasterData/WALs/jenkins-hbase4.apache.org,41601,1685228268527, archiveDir=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/MasterData/oldWALs, maxLogs=10 2023-05-27 22:57:48,650 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/MasterData/WALs/jenkins-hbase4.apache.org,41601,1685228268527/jenkins-hbase4.apache.org%2C41601%2C1685228268527.1685228268639 2023-05-27 22:57:48,650 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45375,DS-55391d60-c599-41fe-9e4d-6bc824ee8bf5,DISK], DatanodeInfoWithStorage[127.0.0.1:39873,DS-ae029a28-be7d-4f56-bbda-9b0db11642c3,DISK]] 2023-05-27 22:57:48,650 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-27 22:57:48,650 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:57:48,650 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-27 22:57:48,650 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-27 22:57:48,652 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-27 22:57:48,654 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-27 22:57:48,654 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-27 22:57:48,655 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:57:48,656 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-27 22:57:48,656 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-27 22:57:48,659 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-27 22:57:48,661 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 22:57:48,662 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=873110, jitterRate=0.11021789908409119}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 22:57:48,662 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 22:57:48,662 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-27 22:57:48,663 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-27 22:57:48,663 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-27 22:57:48,663 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
2023-05-27 22:57:48,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-27 22:57:48,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-27 22:57:48,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-27 22:57:48,665 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-27 22:57:48,666 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-27 22:57:48,683 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-27 22:57:48,683 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-05-27 22:57:48,683 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-27 22:57:48,684 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-27 22:57:48,684 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-27 22:57:48,686 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:57:48,686 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-27 22:57:48,686 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-27 22:57:48,687 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-27 22:57:48,690 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-27 22:57:48,690 DEBUG [Listener at localhost/46401-EventThread] 
zookeeper.ZKWatcher(600): regionserver:34323-0x1006edd2ee10001, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-27 22:57:48,690 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:57:48,691 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,41601,1685228268527, sessionid=0x1006edd2ee10000, setting cluster-up flag (Was=false) 2023-05-27 22:57:48,694 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:57:48,699 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-27 22:57:48,700 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,41601,1685228268527 2023-05-27 22:57:48,703 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:57:48,707 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-27 22:57:48,708 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,41601,1685228268527 2023-05-27 22:57:48,708 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/.hbase-snapshot/.tmp 2023-05-27 22:57:48,711 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-27 22:57:48,711 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 22:57:48,711 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 22:57:48,711 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 22:57:48,711 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 22:57:48,711 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, 
maxPoolSize=10 2023-05-27 22:57:48,711 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:57:48,711 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-27 22:57:48,712 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:57:48,713 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685228298713 2023-05-27 22:57:48,713 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-27 22:57:48,714 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-27 22:57:48,714 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-27 22:57:48,714 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-27 22:57:48,714 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-27 22:57:48,714 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-27 22:57:48,714 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-05-27 22:57:48,715 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-27 22:57:48,715 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-27 22:57:48,715 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-27 22:57:48,715 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-27 22:57:48,715 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-27 22:57:48,717 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-27 22:57:48,718 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-27 22:57:48,718 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-27 22:57:48,719 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685228268718,5,FailOnTimeoutGroup] 2023-05-27 22:57:48,719 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685228268719,5,FailOnTimeoutGroup] 2023-05-27 22:57:48,719 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-27 22:57:48,721 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-27 22:57:48,721 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-27 22:57:48,721 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-05-27 22:57:48,733 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-27 22:57:48,733 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-27 22:57:48,733 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b 2023-05-27 22:57:48,749 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:57:48,751 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-27 22:57:48,752 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/data/hbase/meta/1588230740/info 2023-05-27 22:57:48,753 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-27 22:57:48,753 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:57:48,753 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-27 22:57:48,755 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/data/hbase/meta/1588230740/rep_barrier 2023-05-27 22:57:48,755 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-27 22:57:48,756 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:57:48,756 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-27 22:57:48,757 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/data/hbase/meta/1588230740/table 2023-05-27 22:57:48,758 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-27 22:57:48,759 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:57:48,760 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/data/hbase/meta/1588230740 2023-05-27 22:57:48,760 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/data/hbase/meta/1588230740 2023-05-27 22:57:48,763 DEBUG [PEWorker-1] 
regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-27 22:57:48,764 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-27 22:57:48,769 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 22:57:48,770 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=807882, jitterRate=0.02727581560611725}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-27 22:57:48,770 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-27 22:57:48,770 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-27 22:57:48,770 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-27 22:57:48,770 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-27 22:57:48,770 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-27 22:57:48,770 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-27 22:57:48,770 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-27 22:57:48,771 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-27 22:57:48,772 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-27 22:57:48,772 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-27 22:57:48,772 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-27 22:57:48,774 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-27 22:57:48,775 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-27 22:57:48,781 INFO [RS:0;jenkins-hbase4:34323] regionserver.HRegionServer(951): ClusterId : 5b4ab86d-8bf6-455f-b7a5-f63fbf02ebdb 2023-05-27 22:57:48,782 DEBUG [RS:0;jenkins-hbase4:34323] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-27 22:57:48,784 DEBUG [RS:0;jenkins-hbase4:34323] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-27 22:57:48,785 DEBUG [RS:0;jenkins-hbase4:34323] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-27 22:57:48,787 DEBUG 
[RS:0;jenkins-hbase4:34323] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-27 22:57:48,789 DEBUG [RS:0;jenkins-hbase4:34323] zookeeper.ReadOnlyZKClient(139): Connect 0x0109acdb to 127.0.0.1:54282 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 22:57:48,792 DEBUG [RS:0;jenkins-hbase4:34323] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5363822, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 22:57:48,793 DEBUG [RS:0;jenkins-hbase4:34323] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4034a6e0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-27 22:57:48,806 DEBUG [RS:0;jenkins-hbase4:34323] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:34323 2023-05-27 22:57:48,806 INFO [RS:0;jenkins-hbase4:34323] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-27 22:57:48,806 INFO [RS:0;jenkins-hbase4:34323] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-27 22:57:48,806 DEBUG [RS:0;jenkins-hbase4:34323] regionserver.HRegionServer(1022): About to register with Master. 2023-05-27 22:57:48,806 INFO [RS:0;jenkins-hbase4:34323] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,41601,1685228268527 with isa=jenkins-hbase4.apache.org/172.31.14.131:34323, startcode=1685228268568 2023-05-27 22:57:48,807 DEBUG [RS:0;jenkins-hbase4:34323] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-27 22:57:48,809 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57939, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-05-27 22:57:48,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41601] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34323,1685228268568 2023-05-27 22:57:48,811 DEBUG [RS:0;jenkins-hbase4:34323] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b 2023-05-27 22:57:48,811 DEBUG [RS:0;jenkins-hbase4:34323] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44907 2023-05-27 22:57:48,811 DEBUG [RS:0;jenkins-hbase4:34323] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-27 22:57:48,813 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 22:57:48,813 DEBUG [RS:0;jenkins-hbase4:34323] zookeeper.ZKUtil(162): regionserver:34323-0x1006edd2ee10001, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34323,1685228268568 2023-05-27 22:57:48,814 WARN [RS:0;jenkins-hbase4:34323] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on 
crash by start scripts (Longer MTTR!) 2023-05-27 22:57:48,814 INFO [RS:0;jenkins-hbase4:34323] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 22:57:48,814 DEBUG [RS:0;jenkins-hbase4:34323] regionserver.HRegionServer(1946): logDir=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568 2023-05-27 22:57:48,814 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34323,1685228268568] 2023-05-27 22:57:48,817 DEBUG [RS:0;jenkins-hbase4:34323] zookeeper.ZKUtil(162): regionserver:34323-0x1006edd2ee10001, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34323,1685228268568 2023-05-27 22:57:48,818 DEBUG [RS:0;jenkins-hbase4:34323] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-27 22:57:48,818 INFO [RS:0;jenkins-hbase4:34323] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-27 22:57:48,820 INFO [RS:0;jenkins-hbase4:34323] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-27 22:57:48,821 INFO [RS:0;jenkins-hbase4:34323] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-27 22:57:48,821 INFO [RS:0;jenkins-hbase4:34323] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-27 22:57:48,821 INFO [RS:0;jenkins-hbase4:34323] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-27 22:57:48,822 INFO [RS:0;jenkins-hbase4:34323] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
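
The FSHLogProvider instantiated above reports blocksize=256 MB, rollsize=128 MB and maxLogs=32 a few entries further down. A minimal sketch of the standard configuration keys that drive those values, assuming defaults elsewhere; the snippet is illustrative only and is not code from this test:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.wal.provider", "filesystem");                          // selects FSHLogProvider
        conf.setInt("hbase.regionserver.maxlogs", 32);                         // maxLogs=32 in the WAL configuration line
        conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024); // blocksize=256 MB
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);          // rollsize = 0.5 * blocksize = 128 MB
      }
    }
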
2023-05-27 22:57:48,822 DEBUG [RS:0;jenkins-hbase4:34323] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:57:48,822 DEBUG [RS:0;jenkins-hbase4:34323] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:57:48,822 DEBUG [RS:0;jenkins-hbase4:34323] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:57:48,822 DEBUG [RS:0;jenkins-hbase4:34323] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:57:48,822 DEBUG [RS:0;jenkins-hbase4:34323] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:57:48,822 DEBUG [RS:0;jenkins-hbase4:34323] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-27 22:57:48,823 DEBUG [RS:0;jenkins-hbase4:34323] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:57:48,823 DEBUG [RS:0;jenkins-hbase4:34323] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:57:48,823 DEBUG [RS:0;jenkins-hbase4:34323] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:57:48,823 DEBUG [RS:0;jenkins-hbase4:34323] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:57:48,824 INFO [RS:0;jenkins-hbase4:34323] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-27 22:57:48,824 INFO [RS:0;jenkins-hbase4:34323] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-27 22:57:48,824 INFO [RS:0;jenkins-hbase4:34323] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-27 22:57:48,835 INFO [RS:0;jenkins-hbase4:34323] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-27 22:57:48,835 INFO [RS:0;jenkins-hbase4:34323] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34323,1685228268568-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
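
CompactionChecker, MemstoreFlusherChore, nonceCleaner and the HeapMemoryTunerChore above are ScheduledChore instances run by the region server's ChoreService. A hedged sketch of how a chore is defined and scheduled with that API; the chore name, period and stopper below are hypothetical:

    import org.apache.hadoop.hbase.ChoreService;
    import org.apache.hadoop.hbase.ScheduledChore;
    import org.apache.hadoop.hbase.Stoppable;

    public class ChoreSketch {
      // Hypothetical chore; ScheduledChore and ChoreService are the HBase classes named in the log.
      static class ExampleChore extends ScheduledChore {
        ExampleChore(Stoppable stopper) {
          super("ExampleChore", stopper, 1000); // period=1000, unit=MILLISECONDS, as in the entries above
        }
        @Override
        protected void chore() {
          // periodic work goes here
        }
      }

      public static void main(String[] args) {
        Stoppable stopper = new Stoppable() {
          private volatile boolean stopped;
          @Override public void stop(String why) { stopped = true; }
          @Override public boolean isStopped() { return stopped; }
        };
        ChoreService service = new ChoreService("sketch");
        service.scheduleChore(new ExampleChore(stopper)); // logged as "Chore ScheduledChore ... is enabled."
        service.shutdown();
      }
    }
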
2023-05-27 22:57:48,846 INFO [RS:0;jenkins-hbase4:34323] regionserver.Replication(203): jenkins-hbase4.apache.org,34323,1685228268568 started 2023-05-27 22:57:48,846 INFO [RS:0;jenkins-hbase4:34323] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34323,1685228268568, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34323, sessionid=0x1006edd2ee10001 2023-05-27 22:57:48,847 DEBUG [RS:0;jenkins-hbase4:34323] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-27 22:57:48,847 DEBUG [RS:0;jenkins-hbase4:34323] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34323,1685228268568 2023-05-27 22:57:48,847 DEBUG [RS:0;jenkins-hbase4:34323] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34323,1685228268568' 2023-05-27 22:57:48,847 DEBUG [RS:0;jenkins-hbase4:34323] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 22:57:48,847 DEBUG [RS:0;jenkins-hbase4:34323] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 22:57:48,847 DEBUG [RS:0;jenkins-hbase4:34323] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-27 22:57:48,847 DEBUG [RS:0;jenkins-hbase4:34323] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-27 22:57:48,847 DEBUG [RS:0;jenkins-hbase4:34323] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34323,1685228268568 2023-05-27 22:57:48,847 DEBUG [RS:0;jenkins-hbase4:34323] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34323,1685228268568' 2023-05-27 22:57:48,848 DEBUG [RS:0;jenkins-hbase4:34323] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-27 22:57:48,848 DEBUG [RS:0;jenkins-hbase4:34323] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-27 22:57:48,848 DEBUG [RS:0;jenkins-hbase4:34323] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-27 22:57:48,848 INFO [RS:0;jenkins-hbase4:34323] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-27 22:57:48,848 INFO [RS:0;jenkins-hbase4:34323] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
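
Both quota managers report themselves disabled above because the quota switch defaults to off. A one-line sketch, assuming the standard hbase.quota.enabled key; enabling it (and restarting) is what would start the RPC and space quota managers instead:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class QuotaConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // "Quota support disabled" reflects the default value of this key.
        conf.setBoolean("hbase.quota.enabled", true);
      }
    }
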
2023-05-27 22:57:48,925 DEBUG [jenkins-hbase4:41601] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-27 22:57:48,926 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34323,1685228268568, state=OPENING 2023-05-27 22:57:48,928 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-27 22:57:48,930 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:57:48,931 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34323,1685228268568}] 2023-05-27 22:57:48,931 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-27 22:57:48,950 INFO [RS:0;jenkins-hbase4:34323] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34323%2C1685228268568, suffix=, logDir=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568, archiveDir=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/oldWALs, maxLogs=32 2023-05-27 22:57:48,964 INFO [RS:0;jenkins-hbase4:34323] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228268951 2023-05-27 22:57:48,964 DEBUG [RS:0;jenkins-hbase4:34323] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45375,DS-55391d60-c599-41fe-9e4d-6bc824ee8bf5,DISK], DatanodeInfoWithStorage[127.0.0.1:39873,DS-ae029a28-be7d-4f56-bbda-9b0db11642c3,DISK]] 2023-05-27 22:57:49,086 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,34323,1685228268568 2023-05-27 22:57:49,086 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-27 22:57:49,089 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:32788, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-27 22:57:49,093 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-27 22:57:49,093 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 22:57:49,095 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34323%2C1685228268568.meta, suffix=.meta, logDir=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568, archiveDir=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/oldWALs, maxLogs=32 2023-05-27 22:57:49,109 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.meta.1685228269097.meta 2023-05-27 22:57:49,109 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45375,DS-55391d60-c599-41fe-9e4d-6bc824ee8bf5,DISK], DatanodeInfoWithStorage[127.0.0.1:39873,DS-ae029a28-be7d-4f56-bbda-9b0db11642c3,DISK]] 2023-05-27 22:57:49,109 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-27 22:57:49,110 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-27 22:57:49,110 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-27 22:57:49,110 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-27 22:57:49,111 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-27 22:57:49,111 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:57:49,111 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-27 22:57:49,111 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-27 22:57:49,114 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-27 22:57:49,115 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/data/hbase/meta/1588230740/info 2023-05-27 22:57:49,115 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/data/hbase/meta/1588230740/info 2023-05-27 22:57:49,116 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-27 22:57:49,116 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:57:49,117 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-27 22:57:49,118 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/data/hbase/meta/1588230740/rep_barrier 2023-05-27 22:57:49,118 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/data/hbase/meta/1588230740/rep_barrier 2023-05-27 22:57:49,118 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-27 22:57:49,119 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:57:49,119 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-27 22:57:49,120 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/data/hbase/meta/1588230740/table 2023-05-27 22:57:49,120 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/data/hbase/meta/1588230740/table 2023-05-27 22:57:49,120 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-27 22:57:49,121 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:57:49,121 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/data/hbase/meta/1588230740 2023-05-27 22:57:49,123 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/data/hbase/meta/1588230740 2023-05-27 22:57:49,125 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-27 22:57:49,127 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-27 22:57:49,128 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=739886, jitterRate=-0.05918644368648529}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-27 22:57:49,128 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-27 22:57:49,130 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685228269086 2023-05-27 22:57:49,134 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-27 22:57:49,134 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-27 22:57:49,135 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34323,1685228268568, state=OPEN 2023-05-27 22:57:49,137 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-27 22:57:49,137 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-27 22:57:49,139 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-27 22:57:49,139 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34323,1685228268568 in 206 msec 2023-05-27 22:57:49,142 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-27 22:57:49,142 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 367 msec 2023-05-27 22:57:49,144 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 434 msec 2023-05-27 22:57:49,144 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685228269144, completionTime=-1 2023-05-27 22:57:49,144 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-27 22:57:49,144 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-27 22:57:49,147 DEBUG [hconnection-0x26c49c23-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-27 22:57:49,149 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:32790, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-27 22:57:49,151 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-27 22:57:49,151 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685228329151 2023-05-27 22:57:49,151 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685228389151 2023-05-27 22:57:49,151 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-05-27 22:57:49,157 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41601,1685228268527-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-27 22:57:49,157 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41601,1685228268527-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 22:57:49,157 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41601,1685228268527-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 22:57:49,157 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:41601, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 22:57:49,157 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-27 22:57:49,157 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-05-27 22:57:49,158 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-27 22:57:49,159 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-27 22:57:49,159 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-27 22:57:49,161 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-27 22:57:49,162 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-27 22:57:49,163 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/.tmp/data/hbase/namespace/db176b74bf6b0df8b876dca558df5ab6 2023-05-27 22:57:49,164 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/.tmp/data/hbase/namespace/db176b74bf6b0df8b876dca558df5ab6 empty. 2023-05-27 22:57:49,164 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/.tmp/data/hbase/namespace/db176b74bf6b0df8b876dca558df5ab6 2023-05-27 22:57:49,164 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-27 22:57:49,178 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-27 22:57:49,179 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => db176b74bf6b0df8b876dca558df5ab6, NAME => 'hbase:namespace,,1685228269158.db176b74bf6b0df8b876dca558df5ab6.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/.tmp 2023-05-27 22:57:49,188 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685228269158.db176b74bf6b0df8b876dca558df5ab6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:57:49,189 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing db176b74bf6b0df8b876dca558df5ab6, disabling compactions & flushes 2023-05-27 22:57:49,189 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685228269158.db176b74bf6b0df8b876dca558df5ab6. 
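
The master creates hbase:namespace with the descriptor printed above via CreateTableProcedure. For illustration only, an equivalent descriptor can be assembled with the public client API; the table name below is hypothetical and this is not the master's internal code path:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NamespaceLikeTableSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
              .newBuilder(Bytes.toBytes("info"))
              .setBloomFilterType(BloomType.ROW)  // BLOOMFILTER => 'ROW'
              .setInMemory(true)                  // IN_MEMORY => 'true'
              .setMaxVersions(10)                 // VERSIONS => '10'
              .setBlocksize(8192)                 // BLOCKSIZE => '8192'
              .build();
          TableDescriptor desc = TableDescriptorBuilder
              .newBuilder(TableName.valueOf("namespace_like_demo")) // hypothetical table name
              .setColumnFamily(info)
              .build();
          admin.createTable(desc);
        }
      }
    }
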
2023-05-27 22:57:49,189 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685228269158.db176b74bf6b0df8b876dca558df5ab6. 2023-05-27 22:57:49,189 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685228269158.db176b74bf6b0df8b876dca558df5ab6. after waiting 0 ms 2023-05-27 22:57:49,189 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685228269158.db176b74bf6b0df8b876dca558df5ab6. 2023-05-27 22:57:49,189 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685228269158.db176b74bf6b0df8b876dca558df5ab6. 2023-05-27 22:57:49,189 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for db176b74bf6b0df8b876dca558df5ab6: 2023-05-27 22:57:49,191 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-27 22:57:49,192 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685228269158.db176b74bf6b0df8b876dca558df5ab6.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685228269192"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685228269192"}]},"ts":"1685228269192"} 2023-05-27 22:57:49,195 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-27 22:57:49,196 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-27 22:57:49,196 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685228269196"}]},"ts":"1685228269196"} 2023-05-27 22:57:49,197 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-27 22:57:49,208 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=db176b74bf6b0df8b876dca558df5ab6, ASSIGN}] 2023-05-27 22:57:49,211 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=db176b74bf6b0df8b876dca558df5ab6, ASSIGN 2023-05-27 22:57:49,212 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=db176b74bf6b0df8b876dca558df5ab6, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34323,1685228268568; forceNewPlan=false, retain=false 2023-05-27 22:57:49,363 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=db176b74bf6b0df8b876dca558df5ab6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34323,1685228268568 2023-05-27 22:57:49,363 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685228269158.db176b74bf6b0df8b876dca558df5ab6.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685228269363"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685228269363"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685228269363"}]},"ts":"1685228269363"} 2023-05-27 22:57:49,365 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure db176b74bf6b0df8b876dca558df5ab6, server=jenkins-hbase4.apache.org,34323,1685228268568}] 2023-05-27 22:57:49,521 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685228269158.db176b74bf6b0df8b876dca558df5ab6. 2023-05-27 22:57:49,521 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => db176b74bf6b0df8b876dca558df5ab6, NAME => 'hbase:namespace,,1685228269158.db176b74bf6b0df8b876dca558df5ab6.', STARTKEY => '', ENDKEY => ''} 2023-05-27 22:57:49,521 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace db176b74bf6b0df8b876dca558df5ab6 2023-05-27 22:57:49,521 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685228269158.db176b74bf6b0df8b876dca558df5ab6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:57:49,521 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for db176b74bf6b0df8b876dca558df5ab6 2023-05-27 22:57:49,521 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for db176b74bf6b0df8b876dca558df5ab6 2023-05-27 22:57:49,523 INFO [StoreOpener-db176b74bf6b0df8b876dca558df5ab6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region db176b74bf6b0df8b876dca558df5ab6 2023-05-27 22:57:49,524 DEBUG [StoreOpener-db176b74bf6b0df8b876dca558df5ab6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/data/hbase/namespace/db176b74bf6b0df8b876dca558df5ab6/info 2023-05-27 22:57:49,524 DEBUG [StoreOpener-db176b74bf6b0df8b876dca558df5ab6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/data/hbase/namespace/db176b74bf6b0df8b876dca558df5ab6/info 2023-05-27 22:57:49,524 INFO [StoreOpener-db176b74bf6b0df8b876dca558df5ab6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region db176b74bf6b0df8b876dca558df5ab6 columnFamilyName info 2023-05-27 22:57:49,525 INFO [StoreOpener-db176b74bf6b0df8b876dca558df5ab6-1] regionserver.HStore(310): Store=db176b74bf6b0df8b876dca558df5ab6/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:57:49,526 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/data/hbase/namespace/db176b74bf6b0df8b876dca558df5ab6 2023-05-27 22:57:49,526 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/data/hbase/namespace/db176b74bf6b0df8b876dca558df5ab6 2023-05-27 22:57:49,528 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for db176b74bf6b0df8b876dca558df5ab6 2023-05-27 22:57:49,530 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/data/hbase/namespace/db176b74bf6b0df8b876dca558df5ab6/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 22:57:49,531 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened db176b74bf6b0df8b876dca558df5ab6; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=793908, jitterRate=0.00950653851032257}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 22:57:49,531 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for db176b74bf6b0df8b876dca558df5ab6: 2023-05-27 22:57:49,534 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685228269158.db176b74bf6b0df8b876dca558df5ab6., pid=6, masterSystemTime=1685228269517 2023-05-27 22:57:49,536 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685228269158.db176b74bf6b0df8b876dca558df5ab6. 2023-05-27 22:57:49,536 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685228269158.db176b74bf6b0df8b876dca558df5ab6. 
2023-05-27 22:57:49,537 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=db176b74bf6b0df8b876dca558df5ab6, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34323,1685228268568 2023-05-27 22:57:49,537 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685228269158.db176b74bf6b0df8b876dca558df5ab6.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685228269537"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685228269537"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685228269537"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685228269537"}]},"ts":"1685228269537"} 2023-05-27 22:57:49,542 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-27 22:57:49,542 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure db176b74bf6b0df8b876dca558df5ab6, server=jenkins-hbase4.apache.org,34323,1685228268568 in 174 msec 2023-05-27 22:57:49,544 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-27 22:57:49,544 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=db176b74bf6b0df8b876dca558df5ab6, ASSIGN in 334 msec 2023-05-27 22:57:49,545 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-27 22:57:49,545 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685228269545"}]},"ts":"1685228269545"} 2023-05-27 22:57:49,547 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-27 22:57:49,549 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-27 22:57:49,551 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 391 msec 2023-05-27 22:57:49,560 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-27 22:57:49,562 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-27 22:57:49,562 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:57:49,566 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-27 22:57:49,575 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): 
master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-27 22:57:49,579 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 13 msec 2023-05-27 22:57:49,589 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-27 22:57:49,596 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-27 22:57:49,601 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 11 msec 2023-05-27 22:57:49,613 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-27 22:57:49,617 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-27 22:57:49,617 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.031sec 2023-05-27 22:57:49,617 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-27 22:57:49,618 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-27 22:57:49,618 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-27 22:57:49,618 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41601,1685228268527-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-27 22:57:49,618 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41601,1685228268527-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
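
The 'default' and 'hbase' namespaces above are reserved and created by the master itself during initialization. A user namespace goes through the same CreateNamespaceProcedure; a minimal client-side sketch with a hypothetical namespace name:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class NamespaceSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Hypothetical namespace; drives a CreateNamespaceProcedure like pid=7/pid=8 above.
          admin.createNamespace(NamespaceDescriptor.create("demo_ns").build());
        }
      }
    }
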
2023-05-27 22:57:49,620 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-27 22:57:49,681 DEBUG [Listener at localhost/46401] zookeeper.ReadOnlyZKClient(139): Connect 0x6ee8fcd6 to 127.0.0.1:54282 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 22:57:49,685 DEBUG [Listener at localhost/46401] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@45e2e00c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 22:57:49,687 DEBUG [hconnection-0x699a95bf-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-27 22:57:49,689 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:32794, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-27 22:57:49,690 INFO [Listener at localhost/46401] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,41601,1685228268527 2023-05-27 22:57:49,690 INFO [Listener at localhost/46401] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:57:49,693 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-27 22:57:49,693 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:57:49,694 INFO [Listener at localhost/46401] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-27 22:57:49,694 INFO [Listener at localhost/46401] wal.TestLogRolling(429): Starting testLogRollOnPipelineRestart 2023-05-27 22:57:49,694 INFO [Listener at localhost/46401] wal.TestLogRolling(432): Replication=2 2023-05-27 22:57:49,696 DEBUG [Listener at localhost/46401] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-27 22:57:49,698 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45896, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-27 22:57:49,700 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41601] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-27 22:57:49,700 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41601] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
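
The TableDescriptorChecker warnings above are triggered by the deliberately tiny region and memstore sizes this test runs with, and the test also switches the balancer off before creating its table. A hedged sketch of equivalent client-side setup; how the test itself wires these values is not shown in the log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class SmallRegionTestSetupSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Values this small force frequent flushes and splits; they match the warnings above
        // and are test-only settings, not production guidance.
        conf.setLong("hbase.hregion.max.filesize", 786432);
        conf.setLong("hbase.hregion.memstore.flush.size", 8192);
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          admin.balancerSwitch(false, true); // "set balanceSwitch=false", as logged above
        }
      }
    }
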
2023-05-27 22:57:49,700 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41601] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-27 22:57:49,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41601] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart 2023-05-27 22:57:49,703 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_PRE_OPERATION 2023-05-27 22:57:49,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41601] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnPipelineRestart" procId is: 9 2023-05-27 22:57:49,704 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-27 22:57:49,705 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41601] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-27 22:57:49,706 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/f09cf586fbceb20004891939c6c6856e 2023-05-27 22:57:49,707 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/f09cf586fbceb20004891939c6c6856e empty. 
2023-05-27 22:57:49,707 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/f09cf586fbceb20004891939c6c6856e 2023-05-27 22:57:49,707 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnPipelineRestart regions 2023-05-27 22:57:49,717 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/.tabledesc/.tableinfo.0000000001 2023-05-27 22:57:49,718 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(7675): creating {ENCODED => f09cf586fbceb20004891939c6c6856e, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1685228269699.f09cf586fbceb20004891939c6c6856e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/.tmp 2023-05-27 22:57:49,725 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1685228269699.f09cf586fbceb20004891939c6c6856e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:57:49,725 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1604): Closing f09cf586fbceb20004891939c6c6856e, disabling compactions & flushes 2023-05-27 22:57:49,725 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1685228269699.f09cf586fbceb20004891939c6c6856e. 2023-05-27 22:57:49,725 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685228269699.f09cf586fbceb20004891939c6c6856e. 2023-05-27 22:57:49,725 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685228269699.f09cf586fbceb20004891939c6c6856e. after waiting 0 ms 2023-05-27 22:57:49,725 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1685228269699.f09cf586fbceb20004891939c6c6856e. 2023-05-27 22:57:49,725 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnPipelineRestart,,1685228269699.f09cf586fbceb20004891939c6c6856e. 
2023-05-27 22:57:49,725 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1558): Region close journal for f09cf586fbceb20004891939c6c6856e: 2023-05-27 22:57:49,728 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ADD_TO_META 2023-05-27 22:57:49,729 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1685228269699.f09cf586fbceb20004891939c6c6856e.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1685228269729"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685228269729"}]},"ts":"1685228269729"} 2023-05-27 22:57:49,731 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-27 22:57:49,732 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-27 22:57:49,732 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685228269732"}]},"ts":"1685228269732"} 2023-05-27 22:57:49,734 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLING in hbase:meta 2023-05-27 22:57:49,739 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=f09cf586fbceb20004891939c6c6856e, ASSIGN}] 2023-05-27 22:57:49,741 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=f09cf586fbceb20004891939c6c6856e, ASSIGN 2023-05-27 22:57:49,742 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=f09cf586fbceb20004891939c6c6856e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34323,1685228268568; forceNewPlan=false, retain=false 2023-05-27 22:57:49,894 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=f09cf586fbceb20004891939c6c6856e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34323,1685228268568 2023-05-27 22:57:49,894 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1685228269699.f09cf586fbceb20004891939c6c6856e.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1685228269893"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685228269893"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685228269893"}]},"ts":"1685228269893"} 2023-05-27 22:57:49,896 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure f09cf586fbceb20004891939c6c6856e, server=jenkins-hbase4.apache.org,34323,1685228268568}] 
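
While the ASSIGN and OpenRegionProcedure entries below run, a client typically just blocks until the table is reported available. A minimal polling sketch using the public Admin API; the poll interval is arbitrary:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class WaitForTableSketch {
      public static void main(String[] args) throws Exception {
        TableName tn = TableName.valueOf("TestLogRolling-testLogRollOnPipelineRestart");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Returns true once the region opened by the procedures below is online.
          while (!admin.isTableAvailable(tn)) {
            Thread.sleep(100);
          }
        }
      }
    }
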
2023-05-27 22:57:50,052 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnPipelineRestart,,1685228269699.f09cf586fbceb20004891939c6c6856e. 2023-05-27 22:57:50,052 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f09cf586fbceb20004891939c6c6856e, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1685228269699.f09cf586fbceb20004891939c6c6856e.', STARTKEY => '', ENDKEY => ''} 2023-05-27 22:57:50,053 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnPipelineRestart f09cf586fbceb20004891939c6c6856e 2023-05-27 22:57:50,053 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1685228269699.f09cf586fbceb20004891939c6c6856e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:57:50,053 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f09cf586fbceb20004891939c6c6856e 2023-05-27 22:57:50,053 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f09cf586fbceb20004891939c6c6856e 2023-05-27 22:57:50,054 INFO [StoreOpener-f09cf586fbceb20004891939c6c6856e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region f09cf586fbceb20004891939c6c6856e 2023-05-27 22:57:50,056 DEBUG [StoreOpener-f09cf586fbceb20004891939c6c6856e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/data/default/TestLogRolling-testLogRollOnPipelineRestart/f09cf586fbceb20004891939c6c6856e/info 2023-05-27 22:57:50,056 DEBUG [StoreOpener-f09cf586fbceb20004891939c6c6856e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/data/default/TestLogRolling-testLogRollOnPipelineRestart/f09cf586fbceb20004891939c6c6856e/info 2023-05-27 22:57:50,056 INFO [StoreOpener-f09cf586fbceb20004891939c6c6856e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f09cf586fbceb20004891939c6c6856e columnFamilyName info 2023-05-27 22:57:50,057 INFO [StoreOpener-f09cf586fbceb20004891939c6c6856e-1] regionserver.HStore(310): Store=f09cf586fbceb20004891939c6c6856e/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:57:50,058 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/data/default/TestLogRolling-testLogRollOnPipelineRestart/f09cf586fbceb20004891939c6c6856e 2023-05-27 22:57:50,058 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/data/default/TestLogRolling-testLogRollOnPipelineRestart/f09cf586fbceb20004891939c6c6856e 2023-05-27 22:57:50,061 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f09cf586fbceb20004891939c6c6856e 2023-05-27 22:57:50,062 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/data/default/TestLogRolling-testLogRollOnPipelineRestart/f09cf586fbceb20004891939c6c6856e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 22:57:50,063 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f09cf586fbceb20004891939c6c6856e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=798735, jitterRate=0.015644580125808716}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 22:57:50,063 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f09cf586fbceb20004891939c6c6856e: 2023-05-27 22:57:50,064 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnPipelineRestart,,1685228269699.f09cf586fbceb20004891939c6c6856e., pid=11, masterSystemTime=1685228270049 2023-05-27 22:57:50,066 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnPipelineRestart,,1685228269699.f09cf586fbceb20004891939c6c6856e. 2023-05-27 22:57:50,066 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnPipelineRestart,,1685228269699.f09cf586fbceb20004891939c6c6856e. 
2023-05-27 22:57:50,066 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=f09cf586fbceb20004891939c6c6856e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34323,1685228268568 2023-05-27 22:57:50,067 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1685228269699.f09cf586fbceb20004891939c6c6856e.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1685228270066"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685228270066"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685228270066"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685228270066"}]},"ts":"1685228270066"} 2023-05-27 22:57:50,070 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-27 22:57:50,071 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure f09cf586fbceb20004891939c6c6856e, server=jenkins-hbase4.apache.org,34323,1685228268568 in 172 msec 2023-05-27 22:57:50,073 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-27 22:57:50,073 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=f09cf586fbceb20004891939c6c6856e, ASSIGN in 332 msec 2023-05-27 22:57:50,073 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-27 22:57:50,074 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685228270074"}]},"ts":"1685228270074"} 2023-05-27 22:57:50,075 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLED in hbase:meta 2023-05-27 22:57:50,077 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_POST_OPERATION 2023-05-27 22:57:50,079 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart in 377 msec 2023-05-27 22:57:52,491 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-27 22:57:54,819 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnPipelineRestart' 2023-05-27 22:57:59,706 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41601] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-27 22:57:59,706 INFO [Listener at localhost/46401] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnPipelineRestart, procId: 9 completed 2023-05-27 22:57:59,709 DEBUG [Listener at localhost/46401] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRollOnPipelineRestart 
2023-05-27 22:57:59,709 DEBUG [Listener at localhost/46401] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnPipelineRestart,,1685228269699.f09cf586fbceb20004891939c6c6856e. 2023-05-27 22:58:01,714 INFO [Listener at localhost/46401] wal.TestLogRolling(469): log.getCurrentFileName()): hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228268951 2023-05-27 22:58:01,715 WARN [Listener at localhost/46401] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 22:58:01,716 WARN [ResponseProcessor for block BP-1876548742-172.31.14.131-1685228267961:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1876548742-172.31.14.131-1685228267961:blk_1073741832_1008 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-27 22:58:01,716 WARN [ResponseProcessor for block BP-1876548742-172.31.14.131-1685228267961:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1876548742-172.31.14.131-1685228267961:blk_1073741829_1005 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-27 22:58:01,716 WARN [ResponseProcessor for block BP-1876548742-172.31.14.131-1685228267961:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1876548742-172.31.14.131-1685228267961:blk_1073741833_1009 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-27 22:58:01,717 WARN [DataStreamer for file /user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/MasterData/WALs/jenkins-hbase4.apache.org,41601,1685228268527/jenkins-hbase4.apache.org%2C41601%2C1685228268527.1685228268639 block BP-1876548742-172.31.14.131-1685228267961:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-1876548742-172.31.14.131-1685228267961:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:45375,DS-55391d60-c599-41fe-9e4d-6bc824ee8bf5,DISK], DatanodeInfoWithStorage[127.0.0.1:39873,DS-ae029a28-be7d-4f56-bbda-9b0db11642c3,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:45375,DS-55391d60-c599-41fe-9e4d-6bc824ee8bf5,DISK]) is bad. 
2023-05-27 22:58:01,717 WARN [DataStreamer for file /user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228268951 block BP-1876548742-172.31.14.131-1685228267961:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-1876548742-172.31.14.131-1685228267961:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:45375,DS-55391d60-c599-41fe-9e4d-6bc824ee8bf5,DISK], DatanodeInfoWithStorage[127.0.0.1:39873,DS-ae029a28-be7d-4f56-bbda-9b0db11642c3,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:45375,DS-55391d60-c599-41fe-9e4d-6bc824ee8bf5,DISK]) is bad. 2023-05-27 22:58:01,717 WARN [DataStreamer for file /user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.meta.1685228269097.meta block BP-1876548742-172.31.14.131-1685228267961:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-1876548742-172.31.14.131-1685228267961:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:45375,DS-55391d60-c599-41fe-9e4d-6bc824ee8bf5,DISK], DatanodeInfoWithStorage[127.0.0.1:39873,DS-ae029a28-be7d-4f56-bbda-9b0db11642c3,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:45375,DS-55391d60-c599-41fe-9e4d-6bc824ee8bf5,DISK]) is bad. 2023-05-27 22:58:01,721 INFO [Listener at localhost/46401] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 22:58:01,723 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_172164188_17 at /127.0.0.1:57768 [Receiving block BP-1876548742-172.31.14.131-1685228267961:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:39873:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:57768 dst: /127.0.0.1:39873 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:39873 remote=/127.0.0.1:57768]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:58:01,723 WARN [PacketResponder: BP-1876548742-172.31.14.131-1685228267961:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:39873]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:58:01,723 WARN [PacketResponder: BP-1876548742-172.31.14.131-1685228267961:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:39873]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:58:01,723 WARN [PacketResponder: BP-1876548742-172.31.14.131-1685228267961:blk_1073741829_1005, 
type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:39873]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:58:01,723 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-374446744_17 at /127.0.0.1:57782 [Receiving block BP-1876548742-172.31.14.131-1685228267961:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:39873:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:57782 dst: /127.0.0.1:39873 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:39873 remote=/127.0.0.1:57782]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:58:01,723 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-374446744_17 at /127.0.0.1:57798 [Receiving block BP-1876548742-172.31.14.131-1685228267961:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:39873:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:57798 dst: /127.0.0.1:39873 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:39873 remote=/127.0.0.1:57798]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:58:01,725 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-374446744_17 at /127.0.0.1:33296 [Receiving block BP-1876548742-172.31.14.131-1685228267961:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:45375:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:33296 dst: /127.0.0.1:45375 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:58:01,726 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_172164188_17 at /127.0.0.1:33250 [Receiving block BP-1876548742-172.31.14.131-1685228267961:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:45375:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:33250 dst: /127.0.0.1:45375 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:58:01,726 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-374446744_17 at /127.0.0.1:33282 [Receiving block BP-1876548742-172.31.14.131-1685228267961:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:45375:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:33282 dst: /127.0.0.1:45375 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:58:01,824 WARN [BP-1876548742-172.31.14.131-1685228267961 heartbeating to localhost/127.0.0.1:44907] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-27 22:58:01,824 WARN [BP-1876548742-172.31.14.131-1685228267961 heartbeating to localhost/127.0.0.1:44907] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1876548742-172.31.14.131-1685228267961 (Datanode Uuid e2024e87-a5d0-4488-9696-4dabd1ce4654) service to localhost/127.0.0.1:44907 2023-05-27 22:58:01,825 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/cluster_345137e2-8252-2640-564c-1a1605c278f1/dfs/data/data3/current/BP-1876548742-172.31.14.131-1685228267961] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 22:58:01,826 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/cluster_345137e2-8252-2640-564c-1a1605c278f1/dfs/data/data4/current/BP-1876548742-172.31.14.131-1685228267961] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 22:58:01,831 WARN [Listener at localhost/46401] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 22:58:01,834 WARN [Listener at localhost/46401] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 22:58:01,835 INFO [Listener at localhost/46401] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 22:58:01,839 INFO [Listener at localhost/46401] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/java.io.tmpdir/Jetty_localhost_44673_datanode____.f5lu6f/webapp 2023-05-27 22:58:01,928 INFO [Listener at localhost/46401] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44673 2023-05-27 22:58:01,934 WARN [Listener at localhost/43575] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 22:58:01,938 WARN [Listener at localhost/43575] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 22:58:01,938 WARN [ResponseProcessor for block BP-1876548742-172.31.14.131-1685228267961:blk_1073741832_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1876548742-172.31.14.131-1685228267961:blk_1073741832_1014 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-27 22:58:01,938 WARN [ResponseProcessor for block BP-1876548742-172.31.14.131-1685228267961:blk_1073741829_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1876548742-172.31.14.131-1685228267961:blk_1073741829_1015 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-27 22:58:01,938 WARN [ResponseProcessor for block BP-1876548742-172.31.14.131-1685228267961:blk_1073741833_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1876548742-172.31.14.131-1685228267961:blk_1073741833_1016 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-27 22:58:01,948 INFO [Listener at localhost/43575] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 22:58:02,006 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xdccaeebd18dcde0c: Processing first storage report for DS-55391d60-c599-41fe-9e4d-6bc824ee8bf5 from datanode e2024e87-a5d0-4488-9696-4dabd1ce4654 2023-05-27 22:58:02,006 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xdccaeebd18dcde0c: from storage DS-55391d60-c599-41fe-9e4d-6bc824ee8bf5 node DatanodeRegistration(127.0.0.1:45273, datanodeUuid=e2024e87-a5d0-4488-9696-4dabd1ce4654, infoPort=40463, infoSecurePort=0, ipcPort=43575, storageInfo=lv=-57;cid=testClusterID;nsid=1059905169;c=1685228267961), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 22:58:02,006 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xdccaeebd18dcde0c: Processing first storage report for DS-f9d7bdaa-0968-4fe6-a521-44ee60035c12 from datanode e2024e87-a5d0-4488-9696-4dabd1ce4654 2023-05-27 
22:58:02,006 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xdccaeebd18dcde0c: from storage DS-f9d7bdaa-0968-4fe6-a521-44ee60035c12 node DatanodeRegistration(127.0.0.1:45273, datanodeUuid=e2024e87-a5d0-4488-9696-4dabd1ce4654, infoPort=40463, infoSecurePort=0, ipcPort=43575, storageInfo=lv=-57;cid=testClusterID;nsid=1059905169;c=1685228267961), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 22:58:02,052 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-374446744_17 at /127.0.0.1:43648 [Receiving block BP-1876548742-172.31.14.131-1685228267961:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:39873:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:43648 dst: /127.0.0.1:39873 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:58:02,054 WARN [BP-1876548742-172.31.14.131-1685228267961 heartbeating to localhost/127.0.0.1:44907] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-27 22:58:02,052 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-374446744_17 at /127.0.0.1:43672 [Receiving block BP-1876548742-172.31.14.131-1685228267961:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:39873:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:43672 dst: /127.0.0.1:39873 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:58:02,052 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_172164188_17 at /127.0.0.1:43664 [Receiving block BP-1876548742-172.31.14.131-1685228267961:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:39873:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:43664 dst: /127.0.0.1:39873 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:58:02,054 WARN [BP-1876548742-172.31.14.131-1685228267961 heartbeating to localhost/127.0.0.1:44907] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1876548742-172.31.14.131-1685228267961 (Datanode Uuid 76e4b823-c9a4-4983-84bf-c3e22be4be22) service to localhost/127.0.0.1:44907 2023-05-27 22:58:02,055 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/cluster_345137e2-8252-2640-564c-1a1605c278f1/dfs/data/data1/current/BP-1876548742-172.31.14.131-1685228267961] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 22:58:02,056 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/cluster_345137e2-8252-2640-564c-1a1605c278f1/dfs/data/data2/current/BP-1876548742-172.31.14.131-1685228267961] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 22:58:02,063 WARN [Listener at localhost/43575] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 22:58:02,065 WARN [Listener at localhost/43575] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 22:58:02,066 INFO [Listener at localhost/43575] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 22:58:02,071 INFO [Listener at localhost/43575] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/java.io.tmpdir/Jetty_localhost_33145_datanode____3ooo3d/webapp 2023-05-27 22:58:02,165 INFO [Listener at localhost/43575] 
log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33145 2023-05-27 22:58:02,171 WARN [Listener at localhost/45685] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 22:58:02,236 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x267d71a27c19ca59: Processing first storage report for DS-ae029a28-be7d-4f56-bbda-9b0db11642c3 from datanode 76e4b823-c9a4-4983-84bf-c3e22be4be22 2023-05-27 22:58:02,237 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x267d71a27c19ca59: from storage DS-ae029a28-be7d-4f56-bbda-9b0db11642c3 node DatanodeRegistration(127.0.0.1:45515, datanodeUuid=76e4b823-c9a4-4983-84bf-c3e22be4be22, infoPort=45403, infoSecurePort=0, ipcPort=45685, storageInfo=lv=-57;cid=testClusterID;nsid=1059905169;c=1685228267961), blocks: 6, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-27 22:58:02,237 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x267d71a27c19ca59: Processing first storage report for DS-ed590818-c1d5-44e3-b8b1-acc70066224b from datanode 76e4b823-c9a4-4983-84bf-c3e22be4be22 2023-05-27 22:58:02,237 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x267d71a27c19ca59: from storage DS-ed590818-c1d5-44e3-b8b1-acc70066224b node DatanodeRegistration(127.0.0.1:45515, datanodeUuid=76e4b823-c9a4-4983-84bf-c3e22be4be22, infoPort=45403, infoSecurePort=0, ipcPort=45685, storageInfo=lv=-57;cid=testClusterID;nsid=1059905169;c=1685228267961), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 22:58:03,175 INFO [Listener at localhost/45685] wal.TestLogRolling(481): Data Nodes restarted 2023-05-27 22:58:03,177 INFO [Listener at localhost/45685] wal.AbstractTestLogRolling(233): Validated row row1002 2023-05-27 22:58:03,178 WARN [RS:0;jenkins-hbase4:34323.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=5, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39873,DS-ae029a28-be7d-4f56-bbda-9b0db11642c3,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:58:03,179 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C34323%2C1685228268568:(num 1685228268951) roll requested 2023-05-27 22:58:03,180 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34323] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39873,DS-ae029a28-be7d-4f56-bbda-9b0db11642c3,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:58:03,180 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34323] ipc.CallRunner(144): callId: 11 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:32794 deadline: 1685228293177, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL 2023-05-27 22:58:03,187 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228268951 newFile=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228283180 2023-05-27 22:58:03,188 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL 2023-05-27 22:58:03,188 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228268951 with entries=5, filesize=2.11 KB; new WAL /user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228283180 2023-05-27 22:58:03,188 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:45515,DS-ae029a28-be7d-4f56-bbda-9b0db11642c3,DISK], DatanodeInfoWithStorage[127.0.0.1:45273,DS-55391d60-c599-41fe-9e4d-6bc824ee8bf5,DISK]] 2023-05-27 22:58:03,188 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228268951 is not closed yet, will try archiving it next time 2023-05-27 22:58:03,188 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39873,DS-ae029a28-be7d-4f56-bbda-9b0db11642c3,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:58:03,188 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228268951; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39873,DS-ae029a28-be7d-4f56-bbda-9b0db11642c3,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:58:15,193 INFO [Listener at localhost/45685] wal.AbstractTestLogRolling(233): Validated row row1003 2023-05-27 22:58:17,195 WARN [Listener at localhost/45685] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 22:58:17,196 WARN [ResponseProcessor for block BP-1876548742-172.31.14.131-1685228267961:blk_1073741838_1017] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1876548742-172.31.14.131-1685228267961:blk_1073741838_1017 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-27 22:58:17,197 WARN [DataStreamer for file /user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228283180 block BP-1876548742-172.31.14.131-1685228267961:blk_1073741838_1017] hdfs.DataStreamer(1548): Error Recovery for BP-1876548742-172.31.14.131-1685228267961:blk_1073741838_1017 in pipeline [DatanodeInfoWithStorage[127.0.0.1:45515,DS-ae029a28-be7d-4f56-bbda-9b0db11642c3,DISK], DatanodeInfoWithStorage[127.0.0.1:45273,DS-55391d60-c599-41fe-9e4d-6bc824ee8bf5,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:45515,DS-ae029a28-be7d-4f56-bbda-9b0db11642c3,DISK]) is bad. 
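The entries above capture the pipeline-restart sequence this test exercises: the DataNodes come back, an append against the old WAL block fails with "All datanodes ... are bad", the regionserver requests a roll, the client put is retried after the DamagedWALException, and a new FSHLog writer is created on the restarted pipeline. Below is a minimal Java sketch of driving that kind of sequence against a mini-cluster; it is an illustration built only from the public HBaseTestingUtility and MiniDFSCluster APIs, not the actual TestLogRolling source, and the table name, column family, and row keys are invented for the example.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Illustrative sketch only: drives a WAL append across an HDFS DataNode
// restart on a mini-cluster, the scenario recorded in the log entries above.
public class PipelineRestartSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    util.startMiniCluster();
    try {
      // Hypothetical table/family names, chosen for the example.
      Table table = util.createTable(
          TableName.valueOf("PipelineRestartSketch"), Bytes.toBytes("info"));

      // First append goes to the WAL block on the original pipeline.
      table.put(new Put(Bytes.toBytes("row1001"))
          .addColumn(Bytes.toBytes("info"), Bytes.toBytes("q"), Bytes.toBytes("v")));

      // Restart every DataNode so the WAL's open block loses its pipeline,
      // which is what produces the "datanode ... is bad" recovery warnings.
      util.getDFSCluster().restartDataNodes();
      util.getDFSCluster().waitActive();

      // The next append fails against the dead pipeline and the regionserver
      // requests a WAL roll; the client-side retry of this put should land in
      // the freshly rolled WAL file on the restarted DataNodes.
      table.put(new Put(Bytes.toBytes("row1002"))
          .addColumn(Bytes.toBytes("info"), Bytes.toBytes("q"), Bytes.toBytes("v")));
    } finally {
      util.shutdownMiniCluster();
    }
  }
}

Restarting every DataNode, rather than a single one, leaves the open WAL block with no surviving pipeline member to recover from, which matches the "All datanodes ... are bad. Aborting..." entries recorded here.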
2023-05-27 22:58:17,201 INFO [Listener at localhost/45685] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 22:58:17,201 WARN [PacketResponder: BP-1876548742-172.31.14.131-1685228267961:blk_1073741838_1017, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:45273]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:58:17,201 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-374446744_17 at /127.0.0.1:52038 [Receiving block BP-1876548742-172.31.14.131-1685228267961:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:45273:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:52038 dst: /127.0.0.1:45273 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:45273 remote=/127.0.0.1:52038]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:58:17,203 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-374446744_17 at /127.0.0.1:54778 [Receiving block BP-1876548742-172.31.14.131-1685228267961:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:45515:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:54778 dst: /127.0.0.1:45515 java.io.InterruptedIOException: Interrupted while waiting for IO on 
channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:58:17,235 WARN [BP-1876548742-172.31.14.131-1685228267961 heartbeating to localhost/127.0.0.1:44907] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1876548742-172.31.14.131-1685228267961 (Datanode Uuid 76e4b823-c9a4-4983-84bf-c3e22be4be22) service to localhost/127.0.0.1:44907 2023-05-27 22:58:17,236 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/cluster_345137e2-8252-2640-564c-1a1605c278f1/dfs/data/data1/current/BP-1876548742-172.31.14.131-1685228267961] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 22:58:17,236 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/cluster_345137e2-8252-2640-564c-1a1605c278f1/dfs/data/data2/current/BP-1876548742-172.31.14.131-1685228267961] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 22:58:17,312 WARN [Listener at localhost/45685] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 22:58:17,315 WARN [Listener at localhost/45685] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 22:58:17,316 INFO [Listener at localhost/45685] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 22:58:17,323 INFO [Listener at localhost/45685] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/java.io.tmpdir/Jetty_localhost_44037_datanode____.8nh7gl/webapp 2023-05-27 22:58:17,414 INFO [Listener at localhost/45685] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44037 2023-05-27 22:58:17,423 WARN [Listener at localhost/45743] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 22:58:17,425 WARN [Listener at localhost/45743] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 22:58:17,426 WARN [ResponseProcessor for block BP-1876548742-172.31.14.131-1685228267961:blk_1073741838_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1876548742-172.31.14.131-1685228267961:blk_1073741838_1018 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-27 22:58:17,431 INFO [Listener at localhost/45743] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 22:58:17,487 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1a4747875c603244: Processing first storage report for DS-ae029a28-be7d-4f56-bbda-9b0db11642c3 from datanode 76e4b823-c9a4-4983-84bf-c3e22be4be22 2023-05-27 22:58:17,487 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1a4747875c603244: from storage DS-ae029a28-be7d-4f56-bbda-9b0db11642c3 node DatanodeRegistration(127.0.0.1:41963, datanodeUuid=76e4b823-c9a4-4983-84bf-c3e22be4be22, infoPort=46005, infoSecurePort=0, ipcPort=45743, storageInfo=lv=-57;cid=testClusterID;nsid=1059905169;c=1685228267961), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 22:58:17,487 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1a4747875c603244: Processing first storage report for DS-ed590818-c1d5-44e3-b8b1-acc70066224b from datanode 76e4b823-c9a4-4983-84bf-c3e22be4be22 2023-05-27 22:58:17,487 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1a4747875c603244: from storage DS-ed590818-c1d5-44e3-b8b1-acc70066224b node DatanodeRegistration(127.0.0.1:41963, datanodeUuid=76e4b823-c9a4-4983-84bf-c3e22be4be22, infoPort=46005, infoSecurePort=0, ipcPort=45743, storageInfo=lv=-57;cid=testClusterID;nsid=1059905169;c=1685228267961), blocks: 8, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 22:58:17,534 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-374446744_17 at /127.0.0.1:52006 [Receiving block BP-1876548742-172.31.14.131-1685228267961:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:45273:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:52006 dst: /127.0.0.1:45273 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:58:17,536 WARN [BP-1876548742-172.31.14.131-1685228267961 heartbeating to localhost/127.0.0.1:44907] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-27 22:58:17,536 WARN [BP-1876548742-172.31.14.131-1685228267961 heartbeating to localhost/127.0.0.1:44907] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1876548742-172.31.14.131-1685228267961 (Datanode Uuid e2024e87-a5d0-4488-9696-4dabd1ce4654) service to localhost/127.0.0.1:44907 2023-05-27 22:58:17,537 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/cluster_345137e2-8252-2640-564c-1a1605c278f1/dfs/data/data3/current/BP-1876548742-172.31.14.131-1685228267961] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 22:58:17,537 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/cluster_345137e2-8252-2640-564c-1a1605c278f1/dfs/data/data4/current/BP-1876548742-172.31.14.131-1685228267961] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 22:58:17,543 WARN [Listener at localhost/45743] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 22:58:17,545 WARN [Listener at localhost/45743] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 22:58:17,546 INFO [Listener at localhost/45743] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 22:58:17,550 INFO [Listener at localhost/45743] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/java.io.tmpdir/Jetty_localhost_39207_datanode____1p3u2i/webapp 2023-05-27 22:58:17,640 INFO [Listener at localhost/45743] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39207 2023-05-27 22:58:17,647 WARN [Listener at localhost/35447] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 22:58:17,717 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x940114efe64699d8: Processing first storage report for DS-55391d60-c599-41fe-9e4d-6bc824ee8bf5 from datanode e2024e87-a5d0-4488-9696-4dabd1ce4654 2023-05-27 22:58:17,717 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x940114efe64699d8: from storage DS-55391d60-c599-41fe-9e4d-6bc824ee8bf5 node DatanodeRegistration(127.0.0.1:33653, datanodeUuid=e2024e87-a5d0-4488-9696-4dabd1ce4654, infoPort=32895, infoSecurePort=0, ipcPort=35447, storageInfo=lv=-57;cid=testClusterID;nsid=1059905169;c=1685228267961), blocks: 8, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 22:58:17,717 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x940114efe64699d8: Processing first storage report for DS-f9d7bdaa-0968-4fe6-a521-44ee60035c12 from datanode e2024e87-a5d0-4488-9696-4dabd1ce4654 2023-05-27 22:58:17,717 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x940114efe64699d8: from storage DS-f9d7bdaa-0968-4fe6-a521-44ee60035c12 node DatanodeRegistration(127.0.0.1:33653, datanodeUuid=e2024e87-a5d0-4488-9696-4dabd1ce4654, infoPort=32895, infoSecurePort=0, ipcPort=35447, storageInfo=lv=-57;cid=testClusterID;nsid=1059905169;c=1685228267961), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 22:58:18,652 INFO [Listener at localhost/35447] wal.TestLogRolling(498): Data Nodes restarted 2023-05-27 22:58:18,654 INFO [Listener at localhost/35447] wal.AbstractTestLogRolling(233): Validated row row1004 2023-05-27 22:58:18,655 WARN [RS:0;jenkins-hbase4:34323.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=8, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45273,DS-55391d60-c599-41fe-9e4d-6bc824ee8bf5,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:58:18,655 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C34323%2C1685228268568:(num 1685228283180) roll requested 2023-05-27 22:58:18,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34323] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45273,DS-55391d60-c599-41fe-9e4d-6bc824ee8bf5,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:58:18,656 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34323] ipc.CallRunner(144): callId: 18 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:32794 deadline: 1685228308654, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL 2023-05-27 22:58:18,663 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228283180 newFile=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228298655 2023-05-27 22:58:18,663 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL 2023-05-27 22:58:18,664 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228283180 with entries=2, filesize=2.37 KB; new WAL /user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228298655 2023-05-27 22:58:18,664 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:41963,DS-ae029a28-be7d-4f56-bbda-9b0db11642c3,DISK], DatanodeInfoWithStorage[127.0.0.1:33653,DS-55391d60-c599-41fe-9e4d-6bc824ee8bf5,DISK]] 2023-05-27 22:58:18,664 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45273,DS-55391d60-c599-41fe-9e4d-6bc824ee8bf5,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:58:18,664 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228283180 is not closed yet, will try archiving it next time 2023-05-27 22:58:18,664 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228283180; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:45273,DS-55391d60-c599-41fe-9e4d-6bc824ee8bf5,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:58:18,714 WARN [master/jenkins-hbase4:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39873,DS-ae029a28-be7d-4f56-bbda-9b0db11642c3,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:58:18,714 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C41601%2C1685228268527:(num 1685228268639) roll requested 2023-05-27 22:58:18,714 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39873,DS-ae029a28-be7d-4f56-bbda-9b0db11642c3,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:58:18,715 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423) at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39873,DS-ae029a28-be7d-4f56-bbda-9b0db11642c3,DISK]] are bad. 
Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:58:18,722 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL 2023-05-27 22:58:18,722 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/MasterData/WALs/jenkins-hbase4.apache.org,41601,1685228268527/jenkins-hbase4.apache.org%2C41601%2C1685228268527.1685228268639 with entries=88, filesize=43.80 KB; new WAL /user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/MasterData/WALs/jenkins-hbase4.apache.org,41601,1685228268527/jenkins-hbase4.apache.org%2C41601%2C1685228268527.1685228298714 2023-05-27 22:58:18,722 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33653,DS-55391d60-c599-41fe-9e4d-6bc824ee8bf5,DISK], DatanodeInfoWithStorage[127.0.0.1:41963,DS-ae029a28-be7d-4f56-bbda-9b0db11642c3,DISK]] 2023-05-27 22:58:18,722 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/MasterData/WALs/jenkins-hbase4.apache.org,41601,1685228268527/jenkins-hbase4.apache.org%2C41601%2C1685228268527.1685228268639 is not closed yet, will try archiving it next time 2023-05-27 22:58:18,722 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39873,DS-ae029a28-be7d-4f56-bbda-9b0db11642c3,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:58:18,723 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/MasterData/WALs/jenkins-hbase4.apache.org,41601,1685228268527/jenkins-hbase4.apache.org%2C41601%2C1685228268527.1685228268639; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39873,DS-ae029a28-be7d-4f56-bbda-9b0db11642c3,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:58:30,685 DEBUG [Listener at localhost/35447] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228298655 newFile=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228310676 2023-05-27 22:58:30,686 INFO [Listener at localhost/35447] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228298655 with entries=1, filesize=1.22 KB; new WAL /user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228310676 2023-05-27 22:58:30,690 DEBUG [Listener at localhost/35447] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33653,DS-55391d60-c599-41fe-9e4d-6bc824ee8bf5,DISK], DatanodeInfoWithStorage[127.0.0.1:41963,DS-ae029a28-be7d-4f56-bbda-9b0db11642c3,DISK]] 2023-05-27 22:58:30,690 DEBUG [Listener at localhost/35447] wal.AbstractFSWAL(716): hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228298655 is not closed yet, will try archiving it next time 2023-05-27 22:58:30,690 DEBUG [Listener at localhost/35447] wal.TestLogRolling(512): recovering lease for hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228268951 2023-05-27 22:58:30,691 INFO [Listener at localhost/35447] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228268951 2023-05-27 22:58:30,694 WARN [IPC Server handler 1 on default port 44907] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228268951 has not been closed. Lease recovery is in progress. 
RecoveryId = 1022 for block blk_1073741832_1014 2023-05-27 22:58:30,696 INFO [Listener at localhost/35447] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228268951 after 5ms 2023-05-27 22:58:31,509 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@53ef0146] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-1876548742-172.31.14.131-1685228267961:blk_1073741832_1014, datanode=DatanodeInfoWithStorage[127.0.0.1:33653,null,null]) java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741832_1014, replica=ReplicaWaitingToBeRecovered, blk_1073741832_1008, RWR getNumBytes() = 2160 getBytesOnDisk() = 2160 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/cluster_345137e2-8252-2640-564c-1a1605c278f1/dfs/data/data4/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/cluster_345137e2-8252-2640-564c-1a1605c278f1/dfs/data/data4/current/BP-1876548742-172.31.14.131-1685228267961/current/rbw/blk_1073741832 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55) at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:348) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46) at 
org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741832_1014, replica=ReplicaWaitingToBeRecovered, blk_1073741832_1008, RWR getNumBytes() = 2160 getBytesOnDisk() = 2160 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/cluster_345137e2-8252-2640-564c-1a1605c278f1/dfs/data/data4/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/cluster_345137e2-8252-2640-564c-1a1605c278f1/dfs/data/data4/current/BP-1876548742-172.31.14.131-1685228267961/current/rbw/blk_1073741832 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55) at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy43.initReplicaRecovery(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolTranslatorPB.initReplicaRecovery(InterDatanodeProtocolTranslatorPB.java:83) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346) ... 
4 more 2023-05-27 22:58:34,697 INFO [Listener at localhost/35447] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228268951 after 4006ms 2023-05-27 22:58:34,697 DEBUG [Listener at localhost/35447] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228268951 2023-05-27 22:58:34,710 DEBUG [Listener at localhost/35447] wal.TestLogRolling(522): #3: [\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1685228269531/Put/vlen=175/seqid=0] 2023-05-27 22:58:34,711 DEBUG [Listener at localhost/35447] wal.TestLogRolling(522): #4: [default/info:d/1685228269571/Put/vlen=9/seqid=0] 2023-05-27 22:58:34,711 DEBUG [Listener at localhost/35447] wal.TestLogRolling(522): #5: [hbase/info:d/1685228269593/Put/vlen=7/seqid=0] 2023-05-27 22:58:34,711 DEBUG [Listener at localhost/35447] wal.TestLogRolling(522): #3: [\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1685228270063/Put/vlen=231/seqid=0] 2023-05-27 22:58:34,711 DEBUG [Listener at localhost/35447] wal.TestLogRolling(522): #4: [row1002/info:/1685228279712/Put/vlen=1045/seqid=0] 2023-05-27 22:58:34,711 DEBUG [Listener at localhost/35447] wal.ProtobufLogReader(420): EOF at position 2160 2023-05-27 22:58:34,711 DEBUG [Listener at localhost/35447] wal.TestLogRolling(512): recovering lease for hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228283180 2023-05-27 22:58:34,711 INFO [Listener at localhost/35447] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228283180 2023-05-27 22:58:34,712 WARN [IPC Server handler 3 on default port 44907] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228283180 has not been closed. Lease recovery is in progress. 
RecoveryId = 1023 for block blk_1073741838_1018 2023-05-27 22:58:34,713 INFO [Listener at localhost/35447] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228283180 after 2ms 2023-05-27 22:58:35,490 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@29124ba] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-1876548742-172.31.14.131-1685228267961:blk_1073741838_1018, datanode=DatanodeInfoWithStorage[127.0.0.1:41963,null,null]) java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR getNumBytes() = 2425 getBytesOnDisk() = 2425 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/cluster_345137e2-8252-2640-564c-1a1605c278f1/dfs/data/data1/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/cluster_345137e2-8252-2640-564c-1a1605c278f1/dfs/data/data1/current/BP-1876548742-172.31.14.131-1685228267961/current/rbw/blk_1073741838 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:58:38,713 INFO [Listener at localhost/35447] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228283180 after 4002ms 2023-05-27 22:58:38,713 DEBUG [Listener at localhost/35447] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228283180 2023-05-27 22:58:38,717 DEBUG [Listener at localhost/35447] wal.TestLogRolling(522): #6: [row1003/info:/1685228293189/Put/vlen=1045/seqid=0] 2023-05-27 22:58:38,717 DEBUG [Listener at localhost/35447] wal.TestLogRolling(522): #7: [row1004/info:/1685228295194/Put/vlen=1045/seqid=0] 2023-05-27 22:58:38,717 DEBUG [Listener at localhost/35447] wal.ProtobufLogReader(420): EOF at position 2425 2023-05-27 22:58:38,717 DEBUG [Listener at localhost/35447] wal.TestLogRolling(512): recovering lease for 
hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228298655 2023-05-27 22:58:38,717 INFO [Listener at localhost/35447] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228298655 2023-05-27 22:58:38,718 INFO [Listener at localhost/35447] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=0 on file=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228298655 after 1ms 2023-05-27 22:58:38,718 DEBUG [Listener at localhost/35447] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228298655 2023-05-27 22:58:38,721 DEBUG [Listener at localhost/35447] wal.TestLogRolling(522): #9: [row1005/info:/1685228308674/Put/vlen=1045/seqid=0] 2023-05-27 22:58:38,721 DEBUG [Listener at localhost/35447] wal.TestLogRolling(512): recovering lease for hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228310676 2023-05-27 22:58:38,721 INFO [Listener at localhost/35447] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228310676 2023-05-27 22:58:38,722 WARN [IPC Server handler 4 on default port 44907] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228310676 has not been closed. Lease recovery is in progress. RecoveryId = 1024 for block blk_1073741841_1021 2023-05-27 22:58:38,722 INFO [Listener at localhost/35447] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228310676 after 1ms 2023-05-27 22:58:39,720 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_172164188_17 at /127.0.0.1:34516 [Receiving block BP-1876548742-172.31.14.131-1685228267961:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:33653:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34516 dst: /127.0.0.1:33653 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:33653 remote=/127.0.0.1:34516]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:58:39,721 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_172164188_17 at /127.0.0.1:34090 [Receiving block BP-1876548742-172.31.14.131-1685228267961:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:41963:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34090 dst: /127.0.0.1:41963 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:58:39,720 WARN [ResponseProcessor for block BP-1876548742-172.31.14.131-1685228267961:blk_1073741841_1021] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1876548742-172.31.14.131-1685228267961:blk_1073741841_1021 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-27 22:58:39,721 WARN [DataStreamer for file 
/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228310676 block BP-1876548742-172.31.14.131-1685228267961:blk_1073741841_1021] hdfs.DataStreamer(1548): Error Recovery for BP-1876548742-172.31.14.131-1685228267961:blk_1073741841_1021 in pipeline [DatanodeInfoWithStorage[127.0.0.1:33653,DS-55391d60-c599-41fe-9e4d-6bc824ee8bf5,DISK], DatanodeInfoWithStorage[127.0.0.1:41963,DS-ae029a28-be7d-4f56-bbda-9b0db11642c3,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:33653,DS-55391d60-c599-41fe-9e4d-6bc824ee8bf5,DISK]) is bad. 2023-05-27 22:58:39,727 WARN [DataStreamer for file /user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228310676 block BP-1876548742-172.31.14.131-1685228267961:blk_1073741841_1021] hdfs.DataStreamer(823): DataStreamer Exception org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1876548742-172.31.14.131-1685228267961:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:58:42,722 INFO [Listener at localhost/35447] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228310676 after 4001ms 2023-05-27 22:58:42,723 DEBUG [Listener at localhost/35447] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228310676 2023-05-27 22:58:42,727 DEBUG [Listener at localhost/35447] wal.ProtobufLogReader(420): EOF at position 83 2023-05-27 22:58:42,728 INFO [Listener at localhost/35447] regionserver.HRegion(2745): Flushing db176b74bf6b0df8b876dca558df5ab6 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-27 22:58:42,729 WARN [RS:0;jenkins-hbase4:34323.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): 
Append sequenceId=7, requesting roll of WAL org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1876548742-172.31.14.131-1685228267961:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:58:42,729 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C34323%2C1685228268568:(num 1685228310676) roll requested 2023-05-27 22:58:42,729 DEBUG [Listener at localhost/35447] regionserver.HRegion(2446): Flush status journal for db176b74bf6b0df8b876dca558df5ab6: 2023-05-27 22:58:42,729 INFO [Listener at localhost/35447] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1876548742-172.31.14.131-1685228267961:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at 
com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:58:42,731 INFO [Listener at localhost/35447] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.95 KB heapSize=5.48 KB 2023-05-27 22:58:42,731 WARN [RS_OPEN_META-regionserver/jenkins-hbase4:0-0.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=15, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39873,DS-ae029a28-be7d-4f56-bbda-9b0db11642c3,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:58:42,731 DEBUG [Listener at localhost/35447] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-05-27 22:58:42,732 INFO [Listener at localhost/35447] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39873,DS-ae029a28-be7d-4f56-bbda-9b0db11642c3,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:58:42,733 INFO [Listener at localhost/35447] regionserver.HRegion(2745): Flushing f09cf586fbceb20004891939c6c6856e 1/1 column families, dataSize=4.20 KB heapSize=4.75 KB 2023-05-27 22:58:42,733 DEBUG [Listener at localhost/35447] regionserver.HRegion(2446): Flush status journal for f09cf586fbceb20004891939c6c6856e: 2023-05-27 22:58:42,733 INFO [Listener at localhost/35447] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1876548742-172.31.14.131-1685228267961:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:58:42,738 INFO [Listener at localhost/35447] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-27 22:58:42,738 INFO [Listener at localhost/35447] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-27 22:58:42,738 DEBUG [Listener at localhost/35447] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6ee8fcd6 to 127.0.0.1:54282 2023-05-27 22:58:42,738 DEBUG [Listener at localhost/35447] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 22:58:42,739 DEBUG [Listener at 
localhost/35447] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-27 22:58:42,739 DEBUG [Listener at localhost/35447] util.JVMClusterUtil(257): Found active master hash=1279831875, stopped=false 2023-05-27 22:58:42,739 INFO [Listener at localhost/35447] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,41601,1685228268527 2023-05-27 22:58:42,743 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-27 22:58:42,743 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): regionserver:34323-0x1006edd2ee10001, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-27 22:58:42,743 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:58:42,743 INFO [Listener at localhost/35447] procedure2.ProcedureExecutor(629): Stopping 2023-05-27 22:58:42,743 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 22:58:42,743 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34323-0x1006edd2ee10001, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 22:58:42,743 DEBUG [Listener at localhost/35447] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x61a6a9f8 to 127.0.0.1:54282 2023-05-27 22:58:42,744 DEBUG [Listener at localhost/35447] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 22:58:42,744 INFO [Listener at localhost/35447] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,34323,1685228268568' ***** 2023-05-27 22:58:42,744 INFO [Listener at localhost/35447] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-27 22:58:42,744 INFO [RS:0;jenkins-hbase4:34323] regionserver.HeapMemoryManager(220): Stopping 2023-05-27 22:58:42,745 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-27 22:58:42,745 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228310676 newFile=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228322729 2023-05-27 22:58:42,745 INFO [RS:0;jenkins-hbase4:34323] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-05-27 22:58:42,745 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL 2023-05-27 22:58:42,745 INFO [RS:0;jenkins-hbase4:34323] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-27 22:58:42,745 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228310676 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228322729 2023-05-27 22:58:42,745 INFO [RS:0;jenkins-hbase4:34323] regionserver.HRegionServer(3303): Received CLOSE for db176b74bf6b0df8b876dca558df5ab6 2023-05-27 22:58:42,745 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1876548742-172.31.14.131-1685228267961:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:58:42,745 INFO [RS:0;jenkins-hbase4:34323] regionserver.HRegionServer(3303): Received CLOSE for f09cf586fbceb20004891939c6c6856e 2023-05-27 22:58:42,745 ERROR [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(462): Close of WAL hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228310676 failed. 
Cause="Unexpected BlockUCState: BP-1876548742-172.31.14.131-1685228267961:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) ", errors=3, hasUnflushedEntries=false 2023-05-27 22:58:42,746 INFO [RS:0;jenkins-hbase4:34323] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34323,1685228268568 2023-05-27 22:58:42,746 ERROR [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(426): Failed close of WAL writer hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228310676, unflushedEntries=0 org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1876548742-172.31.14.131-1685228267961:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at 
com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:58:42,746 DEBUG [RS:0;jenkins-hbase4:34323] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0109acdb to 127.0.0.1:54282 2023-05-27 22:58:42,746 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing db176b74bf6b0df8b876dca558df5ab6, disabling compactions & flushes 2023-05-27 22:58:42,746 ERROR [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(221): Roll wal failed and waiting timeout, will not retry org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568/jenkins-hbase4.apache.org%2C34323%2C1685228268568.1685228310676, unflushedEntries=0 at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:427) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.replaceWriter(AbstractFSWAL.java:828) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:884) at org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:304) at org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:211) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1876548742-172.31.14.131-1685228267961:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:58:42,746 DEBUG [RS:0;jenkins-hbase4:34323] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 22:58:42,746 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685228269158.db176b74bf6b0df8b876dca558df5ab6. 2023-05-27 22:58:42,746 INFO [RS:0;jenkins-hbase4:34323] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-27 22:58:42,746 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685228269158.db176b74bf6b0df8b876dca558df5ab6. 2023-05-27 22:58:42,746 INFO [RS:0;jenkins-hbase4:34323] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-05-27 22:58:42,746 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685228269158.db176b74bf6b0df8b876dca558df5ab6. after waiting 0 ms 2023-05-27 22:58:42,746 INFO [RS:0;jenkins-hbase4:34323] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-27 22:58:42,746 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568 2023-05-27 22:58:42,747 INFO [RS:0;jenkins-hbase4:34323] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-27 22:58:42,747 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.nio.channels.ClosedChannelException at org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.throwException4Close(DataStreamer.java:324) at org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:151) at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:105) at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58) at java.io.DataOutputStream.write(DataOutputStream.java:107) at java.io.FilterOutputStream.write(FilterOutputStream.java:97) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.writeWALTrailerAndMagic(ProtobufLogWriter.java:140) at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.writeWALTrailer(AbstractProtobufLogWriter.java:234) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.close(ProtobufLogWriter.java:67) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doShutdown(FSHLog.java:492) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:951) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:946) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:58:42,747 INFO [RS:0;jenkins-hbase4:34323] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-27 22:58:42,746 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685228269158.db176b74bf6b0df8b876dca558df5ab6. 
2023-05-27 22:58:42,747 DEBUG [RS:0;jenkins-hbase4:34323] regionserver.HRegionServer(1478): Online Regions={db176b74bf6b0df8b876dca558df5ab6=hbase:namespace,,1685228269158.db176b74bf6b0df8b876dca558df5ab6., 1588230740=hbase:meta,,1.1588230740, f09cf586fbceb20004891939c6c6856e=TestLogRolling-testLogRollOnPipelineRestart,,1685228269699.f09cf586fbceb20004891939c6c6856e.} 2023-05-27 22:58:42,748 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing db176b74bf6b0df8b876dca558df5ab6 1/1 column families, dataSize=78 B heapSize=728 B 2023-05-27 22:58:42,747 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-27 22:58:42,748 DEBUG [RS:0;jenkins-hbase4:34323] regionserver.HRegionServer(1504): Waiting on 1588230740, db176b74bf6b0df8b876dca558df5ab6, f09cf586fbceb20004891939c6c6856e 2023-05-27 22:58:42,748 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-27 22:58:42,748 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-27 22:58:42,748 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-27 22:58:42,748 WARN [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2760): Received unexpected exception trying to write ABORT_FLUSH marker to WAL: java.io.IOException: Cannot append; log is closed, regionName = hbase:namespace,,1685228269158.db176b74bf6b0df8b876dca558df5ab6. at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.stampSequenceIdAndPublishToRingBuffer(AbstractFSWAL.java:1166) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.append(FSHLog.java:513) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendMarker(AbstractFSWAL.java:1228) at org.apache.hadoop.hbase.regionserver.wal.WALUtil.doFullMarkerAppendTransaction(WALUtil.java:161) at org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeFlushMarker(WALUtil.java:89) at org.apache.hadoop.hbase.regionserver.HRegion.doAbortFlushToWAL(HRegion.java:2758) at org.apache.hadoop.hbase.regionserver.HRegion.internalPrepareFlushCache(HRegion.java:2711) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2578) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2552) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2543) at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1733) at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1554) at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:105) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:102) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) in region hbase:namespace,,1685228269158.db176b74bf6b0df8b876dca558df5ab6. 
2023-05-27 22:58:42,748 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-27 22:58:42,748 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for db176b74bf6b0df8b876dca558df5ab6: 2023-05-27 22:58:42,748 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.95 KB heapSize=5.95 KB 2023-05-27 22:58:42,748 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase4.apache.org,34323,1685228268568: Unrecoverable exception while closing hbase:namespace,,1685228269158.db176b74bf6b0df8b876dca558df5ab6. ***** java.io.IOException: Cannot append; log is closed, regionName = hbase:namespace,,1685228269158.db176b74bf6b0df8b876dca558df5ab6. at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.stampSequenceIdAndPublishToRingBuffer(AbstractFSWAL.java:1166) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.append(FSHLog.java:513) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendMarker(AbstractFSWAL.java:1228) at org.apache.hadoop.hbase.regionserver.wal.WALUtil.doFullMarkerAppendTransaction(WALUtil.java:161) at org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeFlushMarker(WALUtil.java:89) at org.apache.hadoop.hbase.regionserver.HRegion.internalPrepareFlushCache(HRegion.java:2700) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2578) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2552) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2543) at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1733) at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1554) at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:105) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:102) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:58:42,748 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2023-05-27 22:58:42,748 WARN [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2760): Received unexpected exception trying to write ABORT_FLUSH marker to WAL: java.io.IOException: Cannot append; log is closed, regionName = hbase:meta,,1.1588230740 at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.stampSequenceIdAndPublishToRingBuffer(AbstractFSWAL.java:1166) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.append(FSHLog.java:513) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendMarker(AbstractFSWAL.java:1228) at org.apache.hadoop.hbase.regionserver.wal.WALUtil.doFullMarkerAppendTransaction(WALUtil.java:161) at org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeFlushMarker(WALUtil.java:89) at org.apache.hadoop.hbase.regionserver.HRegion.doAbortFlushToWAL(HRegion.java:2758) at org.apache.hadoop.hbase.regionserver.HRegion.internalPrepareFlushCache(HRegion.java:2711) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2578) at 
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2552) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2543) at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1733) at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1554) at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:105) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:102) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) in region hbase:meta,,1.1588230740 2023-05-27 22:58:42,749 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-27 22:58:42,749 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-05-27 22:58:42,748 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for java.lang:type=Memory 2023-05-27 22:58:42,751 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/WALs/jenkins-hbase4.apache.org,34323,1685228268568 2023-05-27 22:58:42,752 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39873,DS-ae029a28-be7d-4f56-bbda-9b0db11642c3,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:58:42,752 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(165): Failed to shutdown wal java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39873,DS-ae029a28-be7d-4f56-bbda-9b0db11642c3,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-27 22:58:42,752 DEBUG [regionserver/jenkins-hbase4:0.logRoller] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Failed log close in log roller 2023-05-27 22:58:42,752 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C34323%2C1685228268568.meta:.meta(num 1685228269097) roll requested 2023-05-27 22:58:42,752 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(874): WAL closed. 
Skipping rolling of writer 2023-05-27 22:58:42,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC 2023-05-27 22:58:42,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication 2023-05-27 22:58:42,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server 2023-05-27 22:58:42,753 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "Verbose": false, "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 1107296256, "init": 513802240, "max": 2051014656, "used": 618065848 }, "NonHeapMemoryUsage": { "committed": 139354112, "init": 2555904, "max": -1, "used": 136826400 }, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] } 2023-05-27 22:58:42,753 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41601] master.MasterRpcServices(609): jenkins-hbase4.apache.org,34323,1685228268568 reported a fatal error: ***** ABORTING region server jenkins-hbase4.apache.org,34323,1685228268568: Unrecoverable exception while closing hbase:namespace,,1685228269158.db176b74bf6b0df8b876dca558df5ab6. ***** Cause: java.io.IOException: Cannot append; log is closed, regionName = hbase:namespace,,1685228269158.db176b74bf6b0df8b876dca558df5ab6. at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.stampSequenceIdAndPublishToRingBuffer(AbstractFSWAL.java:1166) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.append(FSHLog.java:513) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendMarker(AbstractFSWAL.java:1228) at org.apache.hadoop.hbase.regionserver.wal.WALUtil.doFullMarkerAppendTransaction(WALUtil.java:161) at org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeFlushMarker(WALUtil.java:89) at org.apache.hadoop.hbase.regionserver.HRegion.internalPrepareFlushCache(HRegion.java:2700) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2578) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2552) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2543) at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1733) at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1554) at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:105) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:102) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) 2023-05-27 22:58:42,754 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f09cf586fbceb20004891939c6c6856e, disabling compactions & flushes 2023-05-27 22:58:42,754 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1685228269699.f09cf586fbceb20004891939c6c6856e. 
2023-05-27 22:58:42,754 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685228269699.f09cf586fbceb20004891939c6c6856e. 2023-05-27 22:58:42,754 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685228269699.f09cf586fbceb20004891939c6c6856e. after waiting 0 ms 2023-05-27 22:58:42,754 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1685228269699.f09cf586fbceb20004891939c6c6856e. 2023-05-27 22:58:42,754 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f09cf586fbceb20004891939c6c6856e: 2023-05-27 22:58:42,754 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnPipelineRestart,,1685228269699.f09cf586fbceb20004891939c6c6856e. 2023-05-27 22:58:42,826 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-27 22:58:42,831 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-05-27 22:58:42,831 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-05-27 22:58:42,948 INFO [RS:0;jenkins-hbase4:34323] regionserver.HRegionServer(3303): Received CLOSE for db176b74bf6b0df8b876dca558df5ab6 2023-05-27 22:58:42,948 INFO [RS:0;jenkins-hbase4:34323] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-27 22:58:42,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing db176b74bf6b0df8b876dca558df5ab6, disabling compactions & flushes 2023-05-27 22:58:42,948 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-27 22:58:42,948 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685228269158.db176b74bf6b0df8b876dca558df5ab6. 2023-05-27 22:58:42,948 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-27 22:58:42,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685228269158.db176b74bf6b0df8b876dca558df5ab6. 2023-05-27 22:58:42,948 INFO [RS:0;jenkins-hbase4:34323] regionserver.HRegionServer(3303): Received CLOSE for f09cf586fbceb20004891939c6c6856e 2023-05-27 22:58:42,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685228269158.db176b74bf6b0df8b876dca558df5ab6. after waiting 0 ms 2023-05-27 22:58:42,948 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-27 22:58:42,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685228269158.db176b74bf6b0df8b876dca558df5ab6. 
2023-05-27 22:58:42,948 DEBUG [RS:0;jenkins-hbase4:34323] regionserver.HRegionServer(1504): Waiting on 1588230740, db176b74bf6b0df8b876dca558df5ab6, f09cf586fbceb20004891939c6c6856e 2023-05-27 22:58:42,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for db176b74bf6b0df8b876dca558df5ab6: 2023-05-27 22:58:42,948 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-27 22:58:42,949 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-27 22:58:42,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1685228269158.db176b74bf6b0df8b876dca558df5ab6. 2023-05-27 22:58:42,949 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-27 22:58:42,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f09cf586fbceb20004891939c6c6856e, disabling compactions & flushes 2023-05-27 22:58:42,949 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-05-27 22:58:42,949 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1685228269699.f09cf586fbceb20004891939c6c6856e. 2023-05-27 22:58:42,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685228269699.f09cf586fbceb20004891939c6c6856e. 2023-05-27 22:58:42,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685228269699.f09cf586fbceb20004891939c6c6856e. after waiting 0 ms 2023-05-27 22:58:42,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1685228269699.f09cf586fbceb20004891939c6c6856e. 2023-05-27 22:58:42,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f09cf586fbceb20004891939c6c6856e: 2023-05-27 22:58:42,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnPipelineRestart,,1685228269699.f09cf586fbceb20004891939c6c6856e. 2023-05-27 22:58:43,149 INFO [RS:0;jenkins-hbase4:34323] regionserver.HRegionServer(1499): We were exiting though online regions are not empty, because some regions failed closing 2023-05-27 22:58:43,149 INFO [RS:0;jenkins-hbase4:34323] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34323,1685228268568; all regions closed. 
2023-05-27 22:58:43,149 DEBUG [RS:0;jenkins-hbase4:34323] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 22:58:43,149 INFO [RS:0;jenkins-hbase4:34323] regionserver.LeaseManager(133): Closed leases 2023-05-27 22:58:43,149 INFO [RS:0;jenkins-hbase4:34323] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-05-27 22:58:43,149 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-27 22:58:43,150 INFO [RS:0;jenkins-hbase4:34323] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34323 2023-05-27 22:58:43,154 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): regionserver:34323-0x1006edd2ee10001, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34323,1685228268568 2023-05-27 22:58:43,154 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 22:58:43,154 ERROR [Listener at localhost/46401-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@1607d4a3 rejected from java.util.concurrent.ThreadPoolExecutor@123842bb[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 4] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:603) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-05-27 22:58:43,154 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): regionserver:34323-0x1006edd2ee10001, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 22:58:43,155 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34323,1685228268568] 2023-05-27 22:58:43,155 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34323,1685228268568; numProcessing=1 2023-05-27 22:58:43,158 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34323,1685228268568 already deleted, retry=false 2023-05-27 22:58:43,158 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34323,1685228268568 expired; onlineServers=0 2023-05-27 22:58:43,158 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,41601,1685228268527' ***** 2023-05-27 22:58:43,158 INFO 
[RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-27 22:58:43,158 DEBUG [M:0;jenkins-hbase4:41601] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@48772212, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-27 22:58:43,158 INFO [M:0;jenkins-hbase4:41601] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41601,1685228268527 2023-05-27 22:58:43,158 INFO [M:0;jenkins-hbase4:41601] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41601,1685228268527; all regions closed. 2023-05-27 22:58:43,158 DEBUG [M:0;jenkins-hbase4:41601] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 22:58:43,158 DEBUG [M:0;jenkins-hbase4:41601] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-27 22:58:43,158 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-05-27 22:58:43,158 DEBUG [M:0;jenkins-hbase4:41601] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-27 22:58:43,158 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685228268719] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685228268719,5,FailOnTimeoutGroup] 2023-05-27 22:58:43,159 INFO [M:0;jenkins-hbase4:41601] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-27 22:58:43,158 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685228268718] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685228268718,5,FailOnTimeoutGroup] 2023-05-27 22:58:43,159 INFO [M:0;jenkins-hbase4:41601] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
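
The RejectedExecutionException logged a little earlier is the standard JDK failure mode of handing work to an executor that has already shut down: the ZKWatcher's event executor was terminated before a late ZooKeeper notification arrived. A small, self-contained illustration of that mechanism (plain JDK, nothing HBase-specific):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.RejectedExecutionException;

    public class RejectedSubmitSketch {
      public static void main(String[] args) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.shutdown();  // pool stops accepting new tasks and moves toward Terminated
        try {
          pool.submit(() -> System.out.println("never runs"));
        } catch (RejectedExecutionException e) {
          // Same failure mode as the ZKWatcher event above: the watcher's executor
          // was already shut down when the late ZooKeeper event was dispatched.
          System.out.println("rejected: " + e);
        }
      }
    }
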
2023-05-27 22:58:43,160 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-27 22:58:43,160 INFO [M:0;jenkins-hbase4:41601] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-05-27 22:58:43,160 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:58:43,160 DEBUG [M:0;jenkins-hbase4:41601] master.HMaster(1512): Stopping service threads 2023-05-27 22:58:43,160 INFO [M:0;jenkins-hbase4:41601] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-27 22:58:43,160 ERROR [M:0;jenkins-hbase4:41601] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-27 22:58:43,160 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 22:58:43,160 INFO [M:0;jenkins-hbase4:41601] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-27 22:58:43,160 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-05-27 22:58:43,161 DEBUG [M:0;jenkins-hbase4:41601] zookeeper.ZKUtil(398): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-27 22:58:43,161 WARN [M:0;jenkins-hbase4:41601] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-27 22:58:43,161 INFO [M:0;jenkins-hbase4:41601] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-27 22:58:43,161 INFO [M:0;jenkins-hbase4:41601] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-27 22:58:43,162 DEBUG [M:0;jenkins-hbase4:41601] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-27 22:58:43,162 INFO [M:0;jenkins-hbase4:41601] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 22:58:43,162 DEBUG [M:0;jenkins-hbase4:41601] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 22:58:43,162 DEBUG [M:0;jenkins-hbase4:41601] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-27 22:58:43,162 DEBUG [M:0;jenkins-hbase4:41601] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-27 22:58:43,162 INFO [M:0;jenkins-hbase4:41601] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.17 KB heapSize=45.78 KB 2023-05-27 22:58:43,175 INFO [M:0;jenkins-hbase4:41601] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.17 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/c539e47a11334cea82c747ffb74d2fa6 2023-05-27 22:58:43,182 DEBUG [M:0;jenkins-hbase4:41601] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/c539e47a11334cea82c747ffb74d2fa6 as hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/c539e47a11334cea82c747ffb74d2fa6 2023-05-27 22:58:43,187 INFO [M:0;jenkins-hbase4:41601] regionserver.HStore(1080): Added hdfs://localhost:44907/user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/c539e47a11334cea82c747ffb74d2fa6, entries=11, sequenceid=92, filesize=7.0 K 2023-05-27 22:58:43,188 INFO [M:0;jenkins-hbase4:41601] regionserver.HRegion(2948): Finished flush of dataSize ~38.17 KB/39087, heapSize ~45.77 KB/46864, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 26ms, sequenceid=92, compaction requested=false 2023-05-27 22:58:43,189 INFO [M:0;jenkins-hbase4:41601] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 22:58:43,189 DEBUG [M:0;jenkins-hbase4:41601] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 22:58:43,190 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/f2c390fc-5cba-6482-71d6-36350d7ab12b/MasterData/WALs/jenkins-hbase4.apache.org,41601,1685228268527 2023-05-27 22:58:43,194 INFO [M:0;jenkins-hbase4:41601] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-27 22:58:43,194 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-27 22:58:43,195 INFO [M:0;jenkins-hbase4:41601] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41601 2023-05-27 22:58:43,197 DEBUG [M:0;jenkins-hbase4:41601] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,41601,1685228268527 already deleted, retry=false 2023-05-27 22:58:43,255 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): regionserver:34323-0x1006edd2ee10001, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 22:58:43,255 INFO [RS:0;jenkins-hbase4:34323] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34323,1685228268568; zookeeper connection closed. 
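
The master's local store flush above follows the usual HBase flush protocol: the memstore contents (38.17 KB, i.e. 39,087 bytes) are written to a temporary file under .tmp, the file is committed into the column family directory, and the flush is recorded against a sequence id (92 here). From a client, the same flush path can be triggered on an ordinary table through the Admin API; a minimal sketch, where the table name is a placeholder rather than anything taken from this log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class FlushTableSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Asks the hosting region servers to flush the table's memstores to new
          // store files, producing "Flushing ..." / "Added ..., sequenceid=..."
          // entries like the ones above.
          admin.flush(TableName.valueOf("someTestTable"));
        }
      }
    }
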
2023-05-27 22:58:43,255 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): regionserver:34323-0x1006edd2ee10001, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 22:58:43,256 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@b68a995] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@b68a995 2023-05-27 22:58:43,259 INFO [Listener at localhost/35447] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-27 22:58:43,355 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 22:58:43,355 INFO [M:0;jenkins-hbase4:41601] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41601,1685228268527; zookeeper connection closed. 2023-05-27 22:58:43,356 DEBUG [Listener at localhost/46401-EventThread] zookeeper.ZKWatcher(600): master:41601-0x1006edd2ee10000, quorum=127.0.0.1:54282, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 22:58:43,357 WARN [Listener at localhost/35447] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 22:58:43,360 INFO [Listener at localhost/35447] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 22:58:43,464 WARN [BP-1876548742-172.31.14.131-1685228267961 heartbeating to localhost/127.0.0.1:44907] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-27 22:58:43,465 WARN [BP-1876548742-172.31.14.131-1685228267961 heartbeating to localhost/127.0.0.1:44907] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1876548742-172.31.14.131-1685228267961 (Datanode Uuid e2024e87-a5d0-4488-9696-4dabd1ce4654) service to localhost/127.0.0.1:44907 2023-05-27 22:58:43,465 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/cluster_345137e2-8252-2640-564c-1a1605c278f1/dfs/data/data3/current/BP-1876548742-172.31.14.131-1685228267961] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 22:58:43,465 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/cluster_345137e2-8252-2640-564c-1a1605c278f1/dfs/data/data4/current/BP-1876548742-172.31.14.131-1685228267961] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 22:58:43,467 WARN [Listener at localhost/35447] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 22:58:43,473 INFO [Listener at localhost/35447] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 22:58:43,485 WARN [BP-1876548742-172.31.14.131-1685228267961 heartbeating to localhost/127.0.0.1:44907] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1876548742-172.31.14.131-1685228267961 (Datanode Uuid 76e4b823-c9a4-4983-84bf-c3e22be4be22) service to localhost/127.0.0.1:44907 2023-05-27 22:58:43,485 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/cluster_345137e2-8252-2640-564c-1a1605c278f1/dfs/data/data1/current/BP-1876548742-172.31.14.131-1685228267961] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 22:58:43,486 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/cluster_345137e2-8252-2640-564c-1a1605c278f1/dfs/data/data2/current/BP-1876548742-172.31.14.131-1685228267961] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 22:58:43,586 INFO [Listener at localhost/35447] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 22:58:43,697 INFO [Listener at localhost/35447] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-27 22:58:43,711 INFO [Listener at localhost/35447] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-27 22:58:43,720 INFO [Listener at localhost/35447] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=87 (was 78) - Thread LEAK? -, OpenFileDescriptor=460 (was 469), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=34 (was 64), ProcessCount=168 (was 168), AvailableMemoryMB=3747 (was 3920) 2023-05-27 22:58:43,728 INFO [Listener at localhost/35447] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=87, OpenFileDescriptor=460, MaxFileDescriptor=60000, SystemLoadAverage=34, ProcessCount=168, AvailableMemoryMB=3748 2023-05-27 22:58:43,728 INFO [Listener at localhost/35447] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-27 22:58:43,728 INFO [Listener at localhost/35447] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/hadoop.log.dir so I do NOT create it in target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25 2023-05-27 22:58:43,728 INFO [Listener at localhost/35447] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e45b6237-ecd0-f871-b5c5-5ced26aeca8a/hadoop.tmp.dir so I do NOT create it in target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25 2023-05-27 22:58:43,728 INFO [Listener at localhost/35447] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25/cluster_c1cf9527-eefb-427a-6d6e-a5bf485b1df4, deleteOnExit=true 2023-05-27 22:58:43,728 INFO [Listener at localhost/35447] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-27 22:58:43,728 INFO [Listener at localhost/35447] hbase.HBaseTestingUtility(772): Setting test.cache.data to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25/test.cache.data in system properties and HBase conf 2023-05-27 22:58:43,728 INFO [Listener at localhost/35447] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25/hadoop.tmp.dir in system properties and HBase conf 2023-05-27 22:58:43,729 INFO [Listener at localhost/35447] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25/hadoop.log.dir in system properties and HBase conf 2023-05-27 22:58:43,729 INFO [Listener at localhost/35447] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-27 22:58:43,729 INFO [Listener at localhost/35447] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-27 22:58:43,729 INFO [Listener at localhost/35447] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-27 22:58:43,729 DEBUG [Listener at localhost/35447] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-27 22:58:43,729 INFO [Listener at localhost/35447] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-27 22:58:43,729 INFO [Listener at localhost/35447] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-27 22:58:43,729 INFO [Listener at localhost/35447] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-27 22:58:43,729 INFO [Listener at localhost/35447] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-27 22:58:43,729 INFO [Listener at localhost/35447] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-27 22:58:43,730 INFO [Listener at localhost/35447] hbase.HBaseTestingUtility(772): 
Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-27 22:58:43,730 INFO [Listener at localhost/35447] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-27 22:58:43,730 INFO [Listener at localhost/35447] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-27 22:58:43,730 INFO [Listener at localhost/35447] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-27 22:58:43,730 INFO [Listener at localhost/35447] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25/nfs.dump.dir in system properties and HBase conf 2023-05-27 22:58:43,730 INFO [Listener at localhost/35447] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25/java.io.tmpdir in system properties and HBase conf 2023-05-27 22:58:43,730 INFO [Listener at localhost/35447] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-27 22:58:43,730 INFO [Listener at localhost/35447] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-27 22:58:43,730 INFO [Listener at localhost/35447] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-27 22:58:43,732 WARN [Listener at localhost/35447] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
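
The minicluster being brought up here matches the StartMiniClusterOption printed above: one master, one region server, two data nodes, one ZooKeeper server, and no pre-created root or WAL directory. A minimal sketch of the corresponding test-side setup, assuming the standard HBaseTestingUtility API and TEST_UTIL as an illustrative variable name:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
        // Mirrors the option logged above: 1 master, 1 region server,
        // 2 data nodes, 1 ZooKeeper server.
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(1)
            .numDataNodes(2)
            .numZkServers(1)
            .build();
        TEST_UTIL.startMiniCluster(option);
        // ... run test logic against TEST_UTIL.getConnection() ...
        TEST_UTIL.shutdownMiniCluster();
      }
    }
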
2023-05-27 22:58:43,735 WARN [Listener at localhost/35447] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-27 22:58:43,735 WARN [Listener at localhost/35447] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-27 22:58:43,771 WARN [Listener at localhost/35447] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 22:58:43,773 INFO [Listener at localhost/35447] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 22:58:43,777 INFO [Listener at localhost/35447] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25/java.io.tmpdir/Jetty_localhost_40389_hdfs____fiy2oy/webapp 2023-05-27 22:58:43,867 INFO [Listener at localhost/35447] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40389 2023-05-27 22:58:43,868 WARN [Listener at localhost/35447] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-27 22:58:43,872 WARN [Listener at localhost/35447] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-27 22:58:43,872 WARN [Listener at localhost/35447] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-27 22:58:43,909 WARN [Listener at localhost/45583] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 22:58:43,917 WARN [Listener at localhost/45583] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 22:58:43,919 WARN [Listener at localhost/45583] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 22:58:43,921 INFO [Listener at localhost/45583] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 22:58:43,925 INFO [Listener at localhost/45583] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25/java.io.tmpdir/Jetty_localhost_34343_datanode____.g7whbc/webapp 2023-05-27 22:58:44,017 INFO [Listener at localhost/45583] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34343 2023-05-27 22:58:44,026 WARN [Listener at localhost/43961] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 22:58:44,040 WARN [Listener at localhost/43961] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 22:58:44,042 WARN [Listener at localhost/43961] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 22:58:44,043 INFO [Listener at localhost/43961] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 22:58:44,046 INFO [Listener at localhost/43961] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25/java.io.tmpdir/Jetty_localhost_37853_datanode____3pwd7d/webapp 2023-05-27 22:58:44,121 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3921fc691797cf5e: Processing first storage report for DS-b1650dfb-be31-4cf1-a2ab-111e9fe0a3da from datanode bcd28621-0f6f-4ce9-a51f-4c55bdd17df3 2023-05-27 22:58:44,121 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3921fc691797cf5e: from storage DS-b1650dfb-be31-4cf1-a2ab-111e9fe0a3da node DatanodeRegistration(127.0.0.1:42467, datanodeUuid=bcd28621-0f6f-4ce9-a51f-4c55bdd17df3, infoPort=39245, infoSecurePort=0, ipcPort=43961, storageInfo=lv=-57;cid=testClusterID;nsid=1585538242;c=1685228323738), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 22:58:44,121 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3921fc691797cf5e: Processing first storage report for DS-13d31e30-0061-4649-a369-cbf85e5a92bb from datanode bcd28621-0f6f-4ce9-a51f-4c55bdd17df3 2023-05-27 22:58:44,121 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3921fc691797cf5e: from storage DS-13d31e30-0061-4649-a369-cbf85e5a92bb node DatanodeRegistration(127.0.0.1:42467, datanodeUuid=bcd28621-0f6f-4ce9-a51f-4c55bdd17df3, infoPort=39245, infoSecurePort=0, ipcPort=43961, storageInfo=lv=-57;cid=testClusterID;nsid=1585538242;c=1685228323738), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 22:58:44,140 INFO [Listener at localhost/43961] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37853 2023-05-27 22:58:44,147 WARN [Listener at localhost/36811] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 22:58:44,234 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc4577470c2823f56: Processing first storage report for DS-b4db70ee-5a7a-46c1-81d2-e844f25d2a15 from datanode 27c5eb3d-0791-41b3-8475-1ffe4c25e24a 2023-05-27 22:58:44,234 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc4577470c2823f56: from storage DS-b4db70ee-5a7a-46c1-81d2-e844f25d2a15 node DatanodeRegistration(127.0.0.1:34983, datanodeUuid=27c5eb3d-0791-41b3-8475-1ffe4c25e24a, infoPort=43459, infoSecurePort=0, ipcPort=36811, storageInfo=lv=-57;cid=testClusterID;nsid=1585538242;c=1685228323738), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 22:58:44,234 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc4577470c2823f56: Processing first storage report for DS-c9ccefa8-417b-4f9f-807e-d93bae600e62 from datanode 27c5eb3d-0791-41b3-8475-1ffe4c25e24a 2023-05-27 22:58:44,234 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc4577470c2823f56: from storage DS-c9ccefa8-417b-4f9f-807e-d93bae600e62 node DatanodeRegistration(127.0.0.1:34983, datanodeUuid=27c5eb3d-0791-41b3-8475-1ffe4c25e24a, infoPort=43459, infoSecurePort=0, ipcPort=36811, storageInfo=lv=-57;cid=testClusterID;nsid=1585538242;c=1685228323738), 
blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 22:58:44,258 DEBUG [Listener at localhost/36811] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25 2023-05-27 22:58:44,261 INFO [Listener at localhost/36811] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25/cluster_c1cf9527-eefb-427a-6d6e-a5bf485b1df4/zookeeper_0, clientPort=54484, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25/cluster_c1cf9527-eefb-427a-6d6e-a5bf485b1df4/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25/cluster_c1cf9527-eefb-427a-6d6e-a5bf485b1df4/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-27 22:58:44,262 INFO [Listener at localhost/36811] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=54484 2023-05-27 22:58:44,262 INFO [Listener at localhost/36811] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:58:44,263 INFO [Listener at localhost/36811] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:58:44,275 INFO [Listener at localhost/36811] util.FSUtils(471): Created version file at hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98 with version=8 2023-05-27 22:58:44,275 INFO [Listener at localhost/36811] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/hbase-staging 2023-05-27 22:58:44,277 INFO [Listener at localhost/36811] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-27 22:58:44,277 INFO [Listener at localhost/36811] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 22:58:44,277 INFO [Listener at localhost/36811] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-27 22:58:44,277 INFO [Listener at localhost/36811] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-27 22:58:44,277 INFO [Listener at localhost/36811] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 22:58:44,277 INFO [Listener at localhost/36811] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-27 
22:58:44,277 INFO [Listener at localhost/36811] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-27 22:58:44,279 INFO [Listener at localhost/36811] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40691 2023-05-27 22:58:44,279 INFO [Listener at localhost/36811] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:58:44,280 INFO [Listener at localhost/36811] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:58:44,281 INFO [Listener at localhost/36811] zookeeper.RecoverableZooKeeper(93): Process identifier=master:40691 connecting to ZooKeeper ensemble=127.0.0.1:54484 2023-05-27 22:58:44,287 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:406910x0, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-27 22:58:44,287 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:40691-0x1006ede08a70000 connected 2023-05-27 22:58:44,301 DEBUG [Listener at localhost/36811] zookeeper.ZKUtil(164): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 22:58:44,302 DEBUG [Listener at localhost/36811] zookeeper.ZKUtil(164): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 22:58:44,302 DEBUG [Listener at localhost/36811] zookeeper.ZKUtil(164): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-27 22:58:44,303 DEBUG [Listener at localhost/36811] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40691 2023-05-27 22:58:44,303 DEBUG [Listener at localhost/36811] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40691 2023-05-27 22:58:44,303 DEBUG [Listener at localhost/36811] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40691 2023-05-27 22:58:44,303 DEBUG [Listener at localhost/36811] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40691 2023-05-27 22:58:44,303 DEBUG [Listener at localhost/36811] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40691 2023-05-27 22:58:44,304 INFO [Listener at localhost/36811] master.HMaster(444): hbase.rootdir=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98, hbase.cluster.distributed=false 2023-05-27 22:58:44,316 INFO [Listener at localhost/36811] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-27 22:58:44,317 INFO [Listener at localhost/36811] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 22:58:44,317 INFO [Listener at 
localhost/36811] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-27 22:58:44,317 INFO [Listener at localhost/36811] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-27 22:58:44,317 INFO [Listener at localhost/36811] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 22:58:44,317 INFO [Listener at localhost/36811] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-27 22:58:44,317 INFO [Listener at localhost/36811] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-27 22:58:44,318 INFO [Listener at localhost/36811] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38139 2023-05-27 22:58:44,318 INFO [Listener at localhost/36811] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-27 22:58:44,319 DEBUG [Listener at localhost/36811] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-27 22:58:44,320 INFO [Listener at localhost/36811] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:58:44,321 INFO [Listener at localhost/36811] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:58:44,322 INFO [Listener at localhost/36811] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38139 connecting to ZooKeeper ensemble=127.0.0.1:54484 2023-05-27 22:58:44,325 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:381390x0, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-27 22:58:44,326 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38139-0x1006ede08a70001 connected 2023-05-27 22:58:44,326 DEBUG [Listener at localhost/36811] zookeeper.ZKUtil(164): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 22:58:44,327 DEBUG [Listener at localhost/36811] zookeeper.ZKUtil(164): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 22:58:44,327 DEBUG [Listener at localhost/36811] zookeeper.ZKUtil(164): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-27 22:58:44,329 DEBUG [Listener at localhost/36811] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38139 2023-05-27 22:58:44,329 DEBUG [Listener at localhost/36811] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38139 2023-05-27 22:58:44,329 DEBUG [Listener at localhost/36811] ipc.RpcExecutor(311): 
Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38139 2023-05-27 22:58:44,330 DEBUG [Listener at localhost/36811] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38139 2023-05-27 22:58:44,331 DEBUG [Listener at localhost/36811] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38139 2023-05-27 22:58:44,332 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,40691,1685228324276 2023-05-27 22:58:44,334 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-27 22:58:44,334 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,40691,1685228324276 2023-05-27 22:58:44,335 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-27 22:58:44,335 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-27 22:58:44,335 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:58:44,336 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-27 22:58:44,337 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,40691,1685228324276 from backup master directory 2023-05-27 22:58:44,337 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-27 22:58:44,339 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,40691,1685228324276 2023-05-27 22:58:44,339 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
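
Several of the ZKUtil/ZKWatcher entries above note that a watcher was set on a znode that does not yet exist (for example /hbase/master). In raw ZooKeeper terms this is an exists() call, which registers the watch even when the node is absent so that a later NodeCreated event is delivered. A minimal sketch against the plain ZooKeeper client; the connect string, timeout, and sleep are illustrative only:

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.ZooKeeper;

    public class MasterZNodeWatchSketch {
      public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("127.0.0.1:54484", 30_000,
            (WatchedEvent event) ->
                System.out.println("event " + event.getType() + " on " + event.getPath()));
        // Returns null if the node is absent, but the watch is still registered,
        // so a NodeCreated event fires once the active master writes /hbase/master.
        zk.exists("/hbase/master", true);
        Thread.sleep(5_000);  // keep the session alive long enough to observe events
        zk.close();
      }
    }
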
2023-05-27 22:58:44,339 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-27 22:58:44,339 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,40691,1685228324276 2023-05-27 22:58:44,351 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/hbase.id with ID: 8c4e84c2-1510-42df-92f7-5e04ef318a37 2023-05-27 22:58:44,361 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:58:44,363 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:58:44,772 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x626428a9 to 127.0.0.1:54484 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 22:58:44,777 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@85230bd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 22:58:44,777 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-27 22:58:44,778 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-27 22:58:44,778 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 22:58:44,779 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/MasterData/data/master/store-tmp 2023-05-27 22:58:44,789 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect 
now enable 2023-05-27 22:58:44,790 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-27 22:58:44,790 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 22:58:44,790 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 22:58:44,790 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-27 22:58:44,790 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 22:58:44,790 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 22:58:44,790 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 22:58:44,835 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/MasterData/WALs/jenkins-hbase4.apache.org,40691,1685228324276 2023-05-27 22:58:44,847 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40691%2C1685228324276, suffix=, logDir=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/MasterData/WALs/jenkins-hbase4.apache.org,40691,1685228324276, archiveDir=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/MasterData/oldWALs, maxLogs=10 2023-05-27 22:58:44,857 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/MasterData/WALs/jenkins-hbase4.apache.org,40691,1685228324276/jenkins-hbase4.apache.org%2C40691%2C1685228324276.1685228324848 2023-05-27 22:58:44,857 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34983,DS-b4db70ee-5a7a-46c1-81d2-e844f25d2a15,DISK], DatanodeInfoWithStorage[127.0.0.1:42467,DS-b1650dfb-be31-4cf1-a2ab-111e9fe0a3da,DISK]] 2023-05-27 22:58:44,857 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-27 22:58:44,857 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:58:44,857 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-27 22:58:44,857 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-27 22:58:44,859 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-27 22:58:44,860 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-27 22:58:44,860 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-27 22:58:44,861 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:58:44,861 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-27 22:58:44,862 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-27 22:58:44,865 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-27 22:58:44,866 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 22:58:44,867 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=746027, jitterRate=-0.05137786269187927}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 22:58:44,867 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 22:58:44,867 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-27 22:58:44,868 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-27 22:58:44,868 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-27 22:58:44,868 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-27 22:58:44,868 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-27 22:58:44,869 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-27 22:58:44,869 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-27 22:58:44,870 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-27 22:58:44,871 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-27 22:58:44,882 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-27 22:58:44,882 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
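
[Editor's note, not part of the captured log] The entries above show the master's local-store WAL being created under .../MasterData/WALs/jenkins-hbase4.apache.org,40691,1685228324276 on the mini DFS at hdfs://localhost:45583. A minimal sketch of how such a WAL directory could be listed with the plain Hadoop FileSystem API is shown below; the paths come from the log, while the class name and everything else in the snippet are illustrative assumptions, not code from the test.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListMasterWals {
      public static void main(String[] args) throws Exception {
        // Assumption: fs.defaultFS points at the mini-cluster NameNode (hdfs://localhost:45583 in this run).
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://localhost:45583");
        try (FileSystem fs = FileSystem.get(conf)) {
          // WALDir reported by region.MasterRegion in the log above.
          Path walDir = new Path("/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/"
              + "MasterData/WALs/jenkins-hbase4.apache.org,40691,1685228324276");
          for (FileStatus stat : fs.listStatus(walDir)) {
            System.out.println(stat.getPath() + " len=" + stat.getLen());
          }
        }
      }
    }
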
2023-05-27 22:58:44,882 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-27 22:58:44,882 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-27 22:58:44,882 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-27 22:58:44,884 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:58:44,885 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-27 22:58:44,885 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-27 22:58:44,886 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-27 22:58:44,887 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-27 22:58:44,887 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-27 22:58:44,887 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:58:44,888 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,40691,1685228324276, sessionid=0x1006ede08a70000, setting cluster-up flag (Was=false) 2023-05-27 22:58:44,892 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:58:44,896 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-27 22:58:44,897 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,40691,1685228324276 2023-05-27 22:58:44,900 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 
22:58:44,904 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-27 22:58:44,905 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,40691,1685228324276 2023-05-27 22:58:44,905 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/.hbase-snapshot/.tmp 2023-05-27 22:58:44,907 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-27 22:58:44,908 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 22:58:44,908 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 22:58:44,908 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 22:58:44,908 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 22:58:44,908 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-05-27 22:58:44,908 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:58:44,908 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-27 22:58:44,908 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:58:44,911 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685228354911 2023-05-27 22:58:44,911 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-27 22:58:44,911 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-27 22:58:44,911 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-27 22:58:44,911 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-27 22:58:44,911 
INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-27 22:58:44,911 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-27 22:58:44,911 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-27 22:58:44,912 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-27 22:58:44,912 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-27 22:58:44,912 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-27 22:58:44,912 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-27 22:58:44,912 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-27 22:58:44,912 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-27 22:58:44,912 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-27 22:58:44,913 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685228324912,5,FailOnTimeoutGroup] 2023-05-27 22:58:44,913 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685228324913,5,FailOnTimeoutGroup] 2023-05-27 22:58:44,913 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-27 22:58:44,913 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-27 22:58:44,913 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-27 22:58:44,913 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
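
[Editor's note, not part of the captured log] Several DEBUG lines above record ZKUtil setting watchers on znodes that do not exist yet (/hbase/balancer, /hbase/normalizer, /hbase/switch/split, /hbase/switch/merge, /hbase/snapshot-cleanup). A minimal sketch of the underlying ZooKeeper pattern, using the plain ZooKeeper client rather than HBase's ZKUtil, is shown below; the quorum address and znode path are taken from the log, the session timeout and class name are example values.

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    public class WatchMissingZnode {
      public static void main(String[] args) throws Exception {
        Watcher watcher = (WatchedEvent event) ->
            System.out.println("event=" + event.getType() + " path=" + event.getPath());
        // Quorum from the log: 127.0.0.1:54484; 30000 ms is an arbitrary example session timeout.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:54484", 30000, watcher);
        // exists() returns null for a missing znode but still registers the watch,
        // which is what "Set watcher on znode that does not yet exist" refers to.
        Stat stat = zk.exists("/hbase/balancer", watcher);
        System.out.println("/hbase/balancer exists=" + (stat != null));
        zk.close();
      }
    }
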
2023-05-27 22:58:44,914 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-27 22:58:44,924 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-27 22:58:44,924 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-27 22:58:44,925 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98 2023-05-27 22:58:44,934 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:58:44,935 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-27 22:58:44,936 INFO [RS:0;jenkins-hbase4:38139] regionserver.HRegionServer(951): ClusterId : 8c4e84c2-1510-42df-92f7-5e04ef318a37 2023-05-27 22:58:44,938 DEBUG 
[RS:0;jenkins-hbase4:38139] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-27 22:58:44,938 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/meta/1588230740/info 2023-05-27 22:58:44,939 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-27 22:58:44,939 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:58:44,940 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-27 22:58:44,940 DEBUG [RS:0;jenkins-hbase4:38139] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-27 22:58:44,940 DEBUG [RS:0;jenkins-hbase4:38139] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-27 22:58:44,941 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/meta/1588230740/rep_barrier 2023-05-27 22:58:44,941 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-27 22:58:44,942 DEBUG [RS:0;jenkins-hbase4:38139] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-27 22:58:44,942 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:58:44,943 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, 
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-27 22:58:44,943 DEBUG [RS:0;jenkins-hbase4:38139] zookeeper.ReadOnlyZKClient(139): Connect 0x787c076e to 127.0.0.1:54484 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 22:58:44,945 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/meta/1588230740/table 2023-05-27 22:58:44,945 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-27 22:58:44,946 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:58:44,946 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/meta/1588230740 2023-05-27 22:58:44,947 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/meta/1588230740 2023-05-27 22:58:44,948 DEBUG [RS:0;jenkins-hbase4:38139] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@76bbf611, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 22:58:44,948 DEBUG [RS:0;jenkins-hbase4:38139] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@72846c1e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-27 22:58:44,948 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-05-27 22:58:44,949 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-27 22:58:44,955 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 22:58:44,955 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=831756, jitterRate=0.05763345956802368}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-27 22:58:44,956 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-27 22:58:44,956 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-27 22:58:44,956 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-27 22:58:44,956 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-27 22:58:44,956 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-27 22:58:44,956 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-27 22:58:44,956 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-27 22:58:44,956 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-27 22:58:44,957 DEBUG [RS:0;jenkins-hbase4:38139] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:38139 2023-05-27 22:58:44,957 INFO [RS:0;jenkins-hbase4:38139] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-27 22:58:44,957 INFO [RS:0;jenkins-hbase4:38139] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-27 22:58:44,957 DEBUG [RS:0;jenkins-hbase4:38139] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-27 22:58:44,957 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-27 22:58:44,957 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-27 22:58:44,957 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-27 22:58:44,957 INFO [RS:0;jenkins-hbase4:38139] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,40691,1685228324276 with isa=jenkins-hbase4.apache.org/172.31.14.131:38139, startcode=1685228324316 2023-05-27 22:58:44,957 DEBUG [RS:0;jenkins-hbase4:38139] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-27 22:58:44,959 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-27 22:58:44,961 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-27 22:58:44,962 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53813, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-05-27 22:58:44,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:58:44,964 DEBUG [RS:0;jenkins-hbase4:38139] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98 2023-05-27 22:58:44,964 DEBUG [RS:0;jenkins-hbase4:38139] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:45583 2023-05-27 22:58:44,964 DEBUG [RS:0;jenkins-hbase4:38139] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-27 22:58:44,965 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 22:58:44,966 DEBUG [RS:0;jenkins-hbase4:38139] zookeeper.ZKUtil(162): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:58:44,966 WARN [RS:0;jenkins-hbase4:38139] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
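
[Editor's note, not part of the captured log] The RegionServerTracker line above reacts to the ephemeral znode the region server registered under /hbase/rs. A hedged companion sketch that lists those ephemeral children with the raw ZooKeeper client (again, not HBase internals; names other than the quorum and znode path are made up):

    import java.util.List;
    import org.apache.zookeeper.ZooKeeper;

    public class ListRegionServerZnodes {
      public static void main(String[] args) throws Exception {
        // Quorum from the log; 30000 ms is an example session timeout.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:54484", 30000, event -> { });
        // Each live region server registers an ephemeral child such as
        // jenkins-hbase4.apache.org,38139,1685228324316 under /hbase/rs.
        List<String> servers = zk.getChildren("/hbase/rs", false);
        servers.forEach(System.out::println);
        zk.close();
      }
    }
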
2023-05-27 22:58:44,966 INFO [RS:0;jenkins-hbase4:38139] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 22:58:44,966 DEBUG [RS:0;jenkins-hbase4:38139] regionserver.HRegionServer(1946): logDir=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/WALs/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:58:44,966 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38139,1685228324316] 2023-05-27 22:58:44,970 DEBUG [RS:0;jenkins-hbase4:38139] zookeeper.ZKUtil(162): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:58:44,970 DEBUG [RS:0;jenkins-hbase4:38139] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-27 22:58:44,970 INFO [RS:0;jenkins-hbase4:38139] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-27 22:58:44,974 INFO [RS:0;jenkins-hbase4:38139] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-27 22:58:44,975 INFO [RS:0;jenkins-hbase4:38139] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-27 22:58:44,975 INFO [RS:0;jenkins-hbase4:38139] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-27 22:58:44,975 INFO [RS:0;jenkins-hbase4:38139] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-27 22:58:44,976 INFO [RS:0;jenkins-hbase4:38139] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
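
[Editor's note, not part of the captured log] The first entry on the line above shows the region server instantiating an FSHLogProvider-backed WAL. In HBase 2.x the provider is normally chosen through configuration; the sketch below pins the classic FSHLog provider. The key names are recalled from HBase 2.x documentation, not from this log, so treat them as assumptions.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalProviderConfig {
      public static Configuration walConfig() {
        Configuration conf = HBaseConfiguration.create();
        // "filesystem" selects the FSHLog-based provider, matching
        // "Instantiating WALProvider of type class ...FSHLogProvider" above.
        conf.set("hbase.wal.provider", "filesystem");
        // maxLogs=32 in the WAL configuration line plausibly corresponds to this setting
        // (assumption: key name from HBase docs, not from this log).
        conf.setInt("hbase.regionserver.maxlogs", 32);
        return conf;
      }
    }
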
2023-05-27 22:58:44,976 DEBUG [RS:0;jenkins-hbase4:38139] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:58:44,976 DEBUG [RS:0;jenkins-hbase4:38139] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:58:44,976 DEBUG [RS:0;jenkins-hbase4:38139] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:58:44,976 DEBUG [RS:0;jenkins-hbase4:38139] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:58:44,976 DEBUG [RS:0;jenkins-hbase4:38139] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:58:44,977 DEBUG [RS:0;jenkins-hbase4:38139] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-27 22:58:44,977 DEBUG [RS:0;jenkins-hbase4:38139] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:58:44,977 DEBUG [RS:0;jenkins-hbase4:38139] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:58:44,977 DEBUG [RS:0;jenkins-hbase4:38139] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:58:44,977 DEBUG [RS:0;jenkins-hbase4:38139] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:58:44,977 INFO [RS:0;jenkins-hbase4:38139] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-27 22:58:44,977 INFO [RS:0;jenkins-hbase4:38139] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-27 22:58:44,978 INFO [RS:0;jenkins-hbase4:38139] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-27 22:58:44,988 INFO [RS:0;jenkins-hbase4:38139] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-27 22:58:44,988 INFO [RS:0;jenkins-hbase4:38139] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38139,1685228324316-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-27 22:58:45,000 INFO [RS:0;jenkins-hbase4:38139] regionserver.Replication(203): jenkins-hbase4.apache.org,38139,1685228324316 started 2023-05-27 22:58:45,000 INFO [RS:0;jenkins-hbase4:38139] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38139,1685228324316, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38139, sessionid=0x1006ede08a70001 2023-05-27 22:58:45,000 DEBUG [RS:0;jenkins-hbase4:38139] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-27 22:58:45,000 DEBUG [RS:0;jenkins-hbase4:38139] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:58:45,000 DEBUG [RS:0;jenkins-hbase4:38139] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38139,1685228324316' 2023-05-27 22:58:45,000 DEBUG [RS:0;jenkins-hbase4:38139] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 22:58:45,000 DEBUG [RS:0;jenkins-hbase4:38139] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 22:58:45,001 DEBUG [RS:0;jenkins-hbase4:38139] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-27 22:58:45,001 DEBUG [RS:0;jenkins-hbase4:38139] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-27 22:58:45,001 DEBUG [RS:0;jenkins-hbase4:38139] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:58:45,001 DEBUG [RS:0;jenkins-hbase4:38139] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38139,1685228324316' 2023-05-27 22:58:45,001 DEBUG [RS:0;jenkins-hbase4:38139] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-27 22:58:45,001 DEBUG [RS:0;jenkins-hbase4:38139] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-27 22:58:45,001 DEBUG [RS:0;jenkins-hbase4:38139] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-27 22:58:45,001 INFO [RS:0;jenkins-hbase4:38139] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-27 22:58:45,001 INFO [RS:0;jenkins-hbase4:38139] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-27 22:58:45,103 INFO [RS:0;jenkins-hbase4:38139] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38139%2C1685228324316, suffix=, logDir=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/WALs/jenkins-hbase4.apache.org,38139,1685228324316, archiveDir=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/oldWALs, maxLogs=32 2023-05-27 22:58:45,111 DEBUG [jenkins-hbase4:40691] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-27 22:58:45,112 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,38139,1685228324316, state=OPENING 2023-05-27 22:58:45,113 INFO [RS:0;jenkins-hbase4:38139] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/WALs/jenkins-hbase4.apache.org,38139,1685228324316/jenkins-hbase4.apache.org%2C38139%2C1685228324316.1685228325104 2023-05-27 22:58:45,113 DEBUG [RS:0;jenkins-hbase4:38139] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42467,DS-b1650dfb-be31-4cf1-a2ab-111e9fe0a3da,DISK], DatanodeInfoWithStorage[127.0.0.1:34983,DS-b4db70ee-5a7a-46c1-81d2-e844f25d2a15,DISK]] 2023-05-27 22:58:45,113 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-27 22:58:45,114 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:58:45,115 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-27 22:58:45,115 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,38139,1685228324316}] 2023-05-27 22:58:45,268 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:58:45,268 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-27 22:58:45,271 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41850, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-27 22:58:45,274 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-27 22:58:45,274 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 22:58:45,276 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38139%2C1685228324316.meta, suffix=.meta, logDir=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/WALs/jenkins-hbase4.apache.org,38139,1685228324316, archiveDir=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/oldWALs, maxLogs=32 2023-05-27 22:58:45,282 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/WALs/jenkins-hbase4.apache.org,38139,1685228324316/jenkins-hbase4.apache.org%2C38139%2C1685228324316.meta.1685228325276.meta 2023-05-27 22:58:45,282 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34983,DS-b4db70ee-5a7a-46c1-81d2-e844f25d2a15,DISK], DatanodeInfoWithStorage[127.0.0.1:42467,DS-b1650dfb-be31-4cf1-a2ab-111e9fe0a3da,DISK]] 2023-05-27 22:58:45,283 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-27 22:58:45,283 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-27 22:58:45,283 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-27 22:58:45,283 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-27 22:58:45,283 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-27 22:58:45,283 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:58:45,283 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-27 22:58:45,283 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-27 22:58:45,284 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-27 22:58:45,285 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/meta/1588230740/info 2023-05-27 22:58:45,285 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/meta/1588230740/info 2023-05-27 22:58:45,286 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-27 22:58:45,286 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:58:45,286 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-27 22:58:45,287 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/meta/1588230740/rep_barrier 2023-05-27 22:58:45,287 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/meta/1588230740/rep_barrier 2023-05-27 22:58:45,287 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-27 22:58:45,288 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:58:45,288 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-27 22:58:45,289 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/meta/1588230740/table 2023-05-27 22:58:45,289 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/meta/1588230740/table 2023-05-27 22:58:45,289 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-27 22:58:45,289 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:58:45,290 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/meta/1588230740 2023-05-27 22:58:45,291 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/meta/1588230740 2023-05-27 22:58:45,293 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-27 22:58:45,294 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-27 22:58:45,295 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=780410, jitterRate=-0.007658600807189941}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-27 22:58:45,295 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-27 22:58:45,296 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685228325268 2023-05-27 22:58:45,300 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-27 22:58:45,301 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-27 22:58:45,302 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,38139,1685228324316, state=OPEN 2023-05-27 22:58:45,304 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-27 22:58:45,304 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-27 22:58:45,307 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-27 22:58:45,307 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,38139,1685228324316 in 189 msec 2023-05-27 22:58:45,309 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-27 22:58:45,309 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 350 msec 2023-05-27 22:58:45,311 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 403 msec 2023-05-27 22:58:45,311 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685228325311, completionTime=-1 2023-05-27 22:58:45,311 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-27 22:58:45,311 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-27 22:58:45,314 DEBUG [hconnection-0x142cd4fb-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-27 22:58:45,316 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41866, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-27 22:58:45,317 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-27 22:58:45,317 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685228385317 2023-05-27 22:58:45,317 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685228445317 2023-05-27 22:58:45,317 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-05-27 22:58:45,322 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40691,1685228324276-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-27 22:58:45,323 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40691,1685228324276-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 22:58:45,323 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40691,1685228324276-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 22:58:45,323 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:40691, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 22:58:45,323 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-27 22:58:45,323 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
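
[Editor's note, not part of the captured log] At this point pid=1 (InitMetaProcedure) has finished and the AssignmentManager has joined the cluster, so hbase:meta is online on the single region server. A hedged client-side check using the standard 2.x Connection/Table API is sketched below; the ZooKeeper quorum values are taken from the log, the rest of the snippet is illustrative.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;

    public class ScanMeta {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "127.0.0.1");
        conf.set("hbase.zookeeper.property.clientPort", "54484");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table meta = conn.getTable(TableName.META_TABLE_NAME);
             ResultScanner scanner = meta.getScanner(new Scan())) {
          for (Result row : scanner) {
            // Each row describes a region, e.g. the hbase:namespace region created below.
            System.out.println(row);
          }
        }
      }
    }
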
2023-05-27 22:58:45,323 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-27 22:58:45,324 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-27 22:58:45,324 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-27 22:58:45,325 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-27 22:58:45,326 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-27 22:58:45,328 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/.tmp/data/hbase/namespace/eab518ddf145c5cb7d4c7bb9336d6efc 2023-05-27 22:58:45,328 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/.tmp/data/hbase/namespace/eab518ddf145c5cb7d4c7bb9336d6efc empty. 2023-05-27 22:58:45,329 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/.tmp/data/hbase/namespace/eab518ddf145c5cb7d4c7bb9336d6efc 2023-05-27 22:58:45,329 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-27 22:58:45,341 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-27 22:58:45,342 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => eab518ddf145c5cb7d4c7bb9336d6efc, NAME => 'hbase:namespace,,1685228325323.eab518ddf145c5cb7d4c7bb9336d6efc.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/.tmp 2023-05-27 22:58:45,350 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685228325323.eab518ddf145c5cb7d4c7bb9336d6efc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:58:45,350 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing eab518ddf145c5cb7d4c7bb9336d6efc, disabling compactions & flushes 2023-05-27 22:58:45,350 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685228325323.eab518ddf145c5cb7d4c7bb9336d6efc. 
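
[Editor's note, not part of the captured log] The HMaster line above creates 'hbase:namespace' with an 'info' family (BLOOMFILTER 'ROW', IN_MEMORY 'true', VERSIONS '10', BLOCKSIZE '8192'). hbase:namespace is a system table the master creates itself; as a hedged illustration only, the same family attributes would be expressed through the public 2.x admin API for an ordinary table as sketched below. The table name "example" and the class name are made up.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateExampleTable {
      public static void main(String[] args) throws Exception {
        ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
            .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
            .setInMemory(true)                   // IN_MEMORY => 'true'
            .setMaxVersions(10)                  // VERSIONS => '10'
            .setBlocksize(8192)                  // BLOCKSIZE => '8192'
            .build();
        TableDescriptor table = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("example")) // hypothetical table name
            .setColumnFamily(info)
            .build();
        try (Connection conn = ConnectionFactory.createConnection();
             Admin admin = conn.getAdmin()) {
          admin.createTable(table);
        }
      }
    }
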
2023-05-27 22:58:45,350 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685228325323.eab518ddf145c5cb7d4c7bb9336d6efc. 2023-05-27 22:58:45,350 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685228325323.eab518ddf145c5cb7d4c7bb9336d6efc. after waiting 0 ms 2023-05-27 22:58:45,350 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685228325323.eab518ddf145c5cb7d4c7bb9336d6efc. 2023-05-27 22:58:45,350 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685228325323.eab518ddf145c5cb7d4c7bb9336d6efc. 2023-05-27 22:58:45,350 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for eab518ddf145c5cb7d4c7bb9336d6efc: 2023-05-27 22:58:45,353 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-27 22:58:45,354 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685228325323.eab518ddf145c5cb7d4c7bb9336d6efc.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685228325353"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685228325353"}]},"ts":"1685228325353"} 2023-05-27 22:58:45,356 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-27 22:58:45,357 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-27 22:58:45,357 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685228325357"}]},"ts":"1685228325357"} 2023-05-27 22:58:45,359 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-27 22:58:45,366 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=eab518ddf145c5cb7d4c7bb9336d6efc, ASSIGN}] 2023-05-27 22:58:45,368 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=eab518ddf145c5cb7d4c7bb9336d6efc, ASSIGN 2023-05-27 22:58:45,369 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=eab518ddf145c5cb7d4c7bb9336d6efc, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38139,1685228324316; forceNewPlan=false, retain=false 2023-05-27 22:58:45,520 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=eab518ddf145c5cb7d4c7bb9336d6efc, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:58:45,520 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685228325323.eab518ddf145c5cb7d4c7bb9336d6efc.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685228325520"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685228325520"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685228325520"}]},"ts":"1685228325520"} 2023-05-27 22:58:45,522 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure eab518ddf145c5cb7d4c7bb9336d6efc, server=jenkins-hbase4.apache.org,38139,1685228324316}] 2023-05-27 22:58:45,678 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685228325323.eab518ddf145c5cb7d4c7bb9336d6efc. 2023-05-27 22:58:45,679 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => eab518ddf145c5cb7d4c7bb9336d6efc, NAME => 'hbase:namespace,,1685228325323.eab518ddf145c5cb7d4c7bb9336d6efc.', STARTKEY => '', ENDKEY => ''} 2023-05-27 22:58:45,679 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace eab518ddf145c5cb7d4c7bb9336d6efc 2023-05-27 22:58:45,679 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685228325323.eab518ddf145c5cb7d4c7bb9336d6efc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:58:45,679 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for eab518ddf145c5cb7d4c7bb9336d6efc 2023-05-27 22:58:45,679 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for eab518ddf145c5cb7d4c7bb9336d6efc 2023-05-27 22:58:45,680 INFO [StoreOpener-eab518ddf145c5cb7d4c7bb9336d6efc-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region eab518ddf145c5cb7d4c7bb9336d6efc 2023-05-27 22:58:45,681 DEBUG [StoreOpener-eab518ddf145c5cb7d4c7bb9336d6efc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/namespace/eab518ddf145c5cb7d4c7bb9336d6efc/info 2023-05-27 22:58:45,681 DEBUG [StoreOpener-eab518ddf145c5cb7d4c7bb9336d6efc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/namespace/eab518ddf145c5cb7d4c7bb9336d6efc/info 2023-05-27 22:58:45,682 INFO [StoreOpener-eab518ddf145c5cb7d4c7bb9336d6efc-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region eab518ddf145c5cb7d4c7bb9336d6efc columnFamilyName info 2023-05-27 22:58:45,682 INFO [StoreOpener-eab518ddf145c5cb7d4c7bb9336d6efc-1] regionserver.HStore(310): Store=eab518ddf145c5cb7d4c7bb9336d6efc/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:58:45,683 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/namespace/eab518ddf145c5cb7d4c7bb9336d6efc 2023-05-27 22:58:45,683 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/namespace/eab518ddf145c5cb7d4c7bb9336d6efc 2023-05-27 22:58:45,685 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for eab518ddf145c5cb7d4c7bb9336d6efc 2023-05-27 22:58:45,687 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/namespace/eab518ddf145c5cb7d4c7bb9336d6efc/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 22:58:45,688 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened eab518ddf145c5cb7d4c7bb9336d6efc; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=838522, jitterRate=0.06623706221580505}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 22:58:45,688 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for eab518ddf145c5cb7d4c7bb9336d6efc: 2023-05-27 22:58:45,689 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685228325323.eab518ddf145c5cb7d4c7bb9336d6efc., pid=6, masterSystemTime=1685228325675 2023-05-27 22:58:45,692 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685228325323.eab518ddf145c5cb7d4c7bb9336d6efc. 2023-05-27 22:58:45,692 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685228325323.eab518ddf145c5cb7d4c7bb9336d6efc. 
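[Annotation, not part of the recorded output] The records above show the single hbase:namespace region (eab518ddf145c5cb7d4c7bb9336d6efc) being opened on jenkins-hbase4.apache.org,38139,1685228324316 and reported back to the master. As a hedged sketch only, this is roughly how that assignment becomes visible through the HBase 2.x client API; the ZooKeeper address 127.0.0.1:54484 and the expected region/server names are taken from the log, while the class name and the rest of the code are illustrative assumptions, not quoted from the test source.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class LocateNamespaceRegion {
  public static void main(String[] args) throws Exception {
    // Assumption: point the client at the minicluster's ZooKeeper from this log.
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "127.0.0.1");
    conf.set("hbase.zookeeper.property.clientPort", "54484");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         RegionLocator locator = conn.getRegionLocator(TableName.valueOf("hbase:namespace"))) {
      // hbase:namespace has a single region covering the whole key space.
      HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW);
      // Expected to print encoded name eab518ddf145c5cb7d4c7bb9336d6efc hosted on
      // jenkins-hbase4.apache.org,38139,1685228324316, matching the records above.
      System.out.println(loc.getRegion().getEncodedName() + " @ " + loc.getServerName());
    }
  }
}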
2023-05-27 22:58:45,692 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=eab518ddf145c5cb7d4c7bb9336d6efc, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:58:45,692 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685228325323.eab518ddf145c5cb7d4c7bb9336d6efc.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685228325692"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685228325692"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685228325692"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685228325692"}]},"ts":"1685228325692"} 2023-05-27 22:58:45,696 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-27 22:58:45,696 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure eab518ddf145c5cb7d4c7bb9336d6efc, server=jenkins-hbase4.apache.org,38139,1685228324316 in 172 msec 2023-05-27 22:58:45,698 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-27 22:58:45,698 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=eab518ddf145c5cb7d4c7bb9336d6efc, ASSIGN in 330 msec 2023-05-27 22:58:45,699 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-27 22:58:45,699 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685228325699"}]},"ts":"1685228325699"} 2023-05-27 22:58:45,701 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-27 22:58:45,703 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-27 22:58:45,705 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 380 msec 2023-05-27 22:58:45,725 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-27 22:58:45,727 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-27 22:58:45,727 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:58:45,730 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-27 22:58:45,738 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): 
master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-27 22:58:45,742 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 10 msec 2023-05-27 22:58:45,752 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-27 22:58:45,758 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-27 22:58:45,763 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 10 msec 2023-05-27 22:58:45,775 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-27 22:58:45,778 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-27 22:58:45,779 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.440sec 2023-05-27 22:58:45,779 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-27 22:58:45,779 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-27 22:58:45,779 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-27 22:58:45,779 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40691,1685228324276-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-27 22:58:45,779 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40691,1685228324276-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-05-27 22:58:45,781 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-27 22:58:45,837 DEBUG [Listener at localhost/36811] zookeeper.ReadOnlyZKClient(139): Connect 0x7a54aea9 to 127.0.0.1:54484 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 22:58:45,841 DEBUG [Listener at localhost/36811] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3eb7ccd0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 22:58:45,842 DEBUG [hconnection-0x4f5b2c7d-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-27 22:58:45,844 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41874, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-27 22:58:45,846 INFO [Listener at localhost/36811] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,40691,1685228324276 2023-05-27 22:58:45,846 INFO [Listener at localhost/36811] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:58:45,849 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-27 22:58:45,849 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:58:45,849 INFO [Listener at localhost/36811] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-27 22:58:45,851 DEBUG [Listener at localhost/36811] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-27 22:58:45,853 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40954, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-27 22:58:45,855 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-27 22:58:45,855 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
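[Annotation, not part of the recorded output] Immediately above, the test client reports "Minicluster is up" and then issues "set balanceSwitch=false". A minimal sketch of that step as it is typically written against HBaseTestingUtility and Admin in HBase 2.x follows; the class and variable names are illustrative assumptions, not lifted from the actual test code.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.client.Admin;

public class MiniClusterSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility testUtil = new HBaseTestingUtility();
    testUtil.startMiniCluster();            // bring up DFS, ZooKeeper, master and regionserver(s)
    Admin admin = testUtil.getAdmin();
    admin.balancerSwitch(false, true);      // same effect as the balanceSwitch=false RPC logged above
    // ... test body runs here ...
    testUtil.shutdownMiniCluster();
  }
}

Disabling the balancer keeps region placement stable for the duration of the test, which is why it appears right after the cluster comes up in the log.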
2023-05-27 22:58:45,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-27 22:58:45,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:58:45,858 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-05-27 22:58:45,858 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testCompactionRecordDoesntBlockRolling" procId is: 9 2023-05-27 22:58:45,859 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-27 22:58:45,859 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-27 22:58:45,861 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df 2023-05-27 22:58:45,861 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df empty. 
2023-05-27 22:58:45,862 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df 2023-05-27 22:58:45,862 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testCompactionRecordDoesntBlockRolling regions 2023-05-27 22:58:45,874 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/.tabledesc/.tableinfo.0000000001 2023-05-27 22:58:45,875 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => d5a304880ee82d316c8dac1e8851e2df, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/.tmp 2023-05-27 22:58:45,885 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:58:45,885 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1604): Closing d5a304880ee82d316c8dac1e8851e2df, disabling compactions & flushes 2023-05-27 22:58:45,885 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df. 2023-05-27 22:58:45,885 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df. 2023-05-27 22:58:45,885 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df. after waiting 0 ms 2023-05-27 22:58:45,885 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df. 2023-05-27 22:58:45,885 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df. 
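[Annotation, not part of the recorded output] The TableDescriptorChecker warnings above (MAX_FILESIZE 786432, MEMSTORE_FLUSHSIZE 8192) and the CreateTableProcedure for TestLogRolling-testCompactionRecordDoesntBlockRolling correspond to a deliberately tiny table so that flushes and splits happen quickly during the test. Whether the test sets these limits through the table descriptor or through the hbase.hregion.* configuration keys named in the warnings is not visible here; the descriptor form is shown below as a hedged sketch, with only the numeric values taken from the log.

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTinyTable {
  // Create a one-family table with a small max file size and memstore flush size,
  // matching the values flagged by TableDescriptorChecker above.
  static void createTinyTable(Admin admin) throws IOException {
    TableDescriptor td = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("TestLogRolling-testCompactionRecordDoesntBlockRolling"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of(Bytes.toBytes("info")))
        .setMaxFileSize(786432L)        // analogous to hbase.hregion.max.filesize
        .setMemStoreFlushSize(8192L)    // analogous to hbase.hregion.memstore.flush.size
        .build();
    admin.createTable(td);
  }
}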
2023-05-27 22:58:45,885 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1558): Region close journal for d5a304880ee82d316c8dac1e8851e2df: 2023-05-27 22:58:45,887 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ADD_TO_META 2023-05-27 22:58:45,888 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1685228325888"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685228325888"}]},"ts":"1685228325888"} 2023-05-27 22:58:45,890 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-27 22:58:45,891 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-27 22:58:45,891 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685228325891"}]},"ts":"1685228325891"} 2023-05-27 22:58:45,892 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLING in hbase:meta 2023-05-27 22:58:45,897 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=d5a304880ee82d316c8dac1e8851e2df, ASSIGN}] 2023-05-27 22:58:45,898 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=d5a304880ee82d316c8dac1e8851e2df, ASSIGN 2023-05-27 22:58:45,899 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=d5a304880ee82d316c8dac1e8851e2df, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38139,1685228324316; forceNewPlan=false, retain=false 2023-05-27 22:58:46,050 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=d5a304880ee82d316c8dac1e8851e2df, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:58:46,050 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1685228326050"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685228326050"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685228326050"}]},"ts":"1685228326050"} 2023-05-27 22:58:46,052 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; 
OpenRegionProcedure d5a304880ee82d316c8dac1e8851e2df, server=jenkins-hbase4.apache.org,38139,1685228324316}] 2023-05-27 22:58:46,208 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df. 2023-05-27 22:58:46,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d5a304880ee82d316c8dac1e8851e2df, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df.', STARTKEY => '', ENDKEY => ''} 2023-05-27 22:58:46,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testCompactionRecordDoesntBlockRolling d5a304880ee82d316c8dac1e8851e2df 2023-05-27 22:58:46,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:58:46,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d5a304880ee82d316c8dac1e8851e2df 2023-05-27 22:58:46,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d5a304880ee82d316c8dac1e8851e2df 2023-05-27 22:58:46,209 INFO [StoreOpener-d5a304880ee82d316c8dac1e8851e2df-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region d5a304880ee82d316c8dac1e8851e2df 2023-05-27 22:58:46,211 DEBUG [StoreOpener-d5a304880ee82d316c8dac1e8851e2df-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/info 2023-05-27 22:58:46,211 DEBUG [StoreOpener-d5a304880ee82d316c8dac1e8851e2df-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/info 2023-05-27 22:58:46,211 INFO [StoreOpener-d5a304880ee82d316c8dac1e8851e2df-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d5a304880ee82d316c8dac1e8851e2df columnFamilyName info 2023-05-27 22:58:46,212 INFO [StoreOpener-d5a304880ee82d316c8dac1e8851e2df-1] regionserver.HStore(310): Store=d5a304880ee82d316c8dac1e8851e2df/info, memstore 
type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:58:46,213 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df 2023-05-27 22:58:46,213 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df 2023-05-27 22:58:46,216 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d5a304880ee82d316c8dac1e8851e2df 2023-05-27 22:58:46,218 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 22:58:46,218 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d5a304880ee82d316c8dac1e8851e2df; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=793086, jitterRate=0.008461996912956238}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 22:58:46,218 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d5a304880ee82d316c8dac1e8851e2df: 2023-05-27 22:58:46,219 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df., pid=11, masterSystemTime=1685228326204 2023-05-27 22:58:46,221 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df. 2023-05-27 22:58:46,221 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df. 
2023-05-27 22:58:46,222 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=d5a304880ee82d316c8dac1e8851e2df, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:58:46,222 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1685228326222"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685228326222"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685228326222"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685228326222"}]},"ts":"1685228326222"} 2023-05-27 22:58:46,226 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-27 22:58:46,226 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure d5a304880ee82d316c8dac1e8851e2df, server=jenkins-hbase4.apache.org,38139,1685228324316 in 172 msec 2023-05-27 22:58:46,228 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-27 22:58:46,228 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=d5a304880ee82d316c8dac1e8851e2df, ASSIGN in 329 msec 2023-05-27 22:58:46,229 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-27 22:58:46,229 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685228326229"}]},"ts":"1685228326229"} 2023-05-27 22:58:46,231 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLED in hbase:meta 2023-05-27 22:58:46,234 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_POST_OPERATION 2023-05-27 22:58:46,236 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling in 379 msec 2023-05-27 22:58:50,834 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-27 22:58:50,970 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-27 22:58:55,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-27 22:58:55,861 INFO [Listener at localhost/36811] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testCompactionRecordDoesntBlockRolling, procId: 9 completed 2023-05-27 22:58:55,863 DEBUG [Listener at localhost/36811] 
hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:58:55,863 DEBUG [Listener at localhost/36811] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df. 2023-05-27 22:58:55,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-05-27 22:58:55,883 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] procedure.ProcedureCoordinator(165): Submitting procedure hbase:namespace 2023-05-27 22:58:55,884 INFO [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'hbase:namespace' 2023-05-27 22:58:55,884 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-27 22:58:55,884 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'hbase:namespace' starting 'acquire' 2023-05-27 22:58:55,884 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'hbase:namespace', kicking off acquire phase on members. 2023-05-27 22:58:55,885 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-05-27 22:58:55,885 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-27 22:58:55,886 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-27 22:58:55,886 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:58:55,886 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-27 22:58:55,886 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 22:58:55,886 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:58:55,886 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-27 22:58:55,886 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/hbase:namespace 2023-05-27 22:58:55,887 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-05-27 22:58:55,887 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-27 22:58:55,887 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-27 22:58:55,888 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for hbase:namespace 2023-05-27 22:58:55,889 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:hbase:namespace 2023-05-27 22:58:55,889 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'hbase:namespace' with timeout 60000ms 2023-05-27 22:58:55,890 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-27 22:58:55,890 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'hbase:namespace' starting 'acquire' stage 2023-05-27 22:58:55,891 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-27 22:58:55,891 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-27 22:58:55,891 DEBUG [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on hbase:namespace,,1685228325323.eab518ddf145c5cb7d4c7bb9336d6efc. 2023-05-27 22:58:55,891 DEBUG [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region hbase:namespace,,1685228325323.eab518ddf145c5cb7d4c7bb9336d6efc. started... 
2023-05-27 22:58:55,891 INFO [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing eab518ddf145c5cb7d4c7bb9336d6efc 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-27 22:58:55,902 INFO [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/namespace/eab518ddf145c5cb7d4c7bb9336d6efc/.tmp/info/2d6ae6786ad146eab6545fa3d858c459 2023-05-27 22:58:55,909 DEBUG [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/namespace/eab518ddf145c5cb7d4c7bb9336d6efc/.tmp/info/2d6ae6786ad146eab6545fa3d858c459 as hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/namespace/eab518ddf145c5cb7d4c7bb9336d6efc/info/2d6ae6786ad146eab6545fa3d858c459 2023-05-27 22:58:55,915 INFO [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/namespace/eab518ddf145c5cb7d4c7bb9336d6efc/info/2d6ae6786ad146eab6545fa3d858c459, entries=2, sequenceid=6, filesize=4.8 K 2023-05-27 22:58:55,915 INFO [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for eab518ddf145c5cb7d4c7bb9336d6efc in 24ms, sequenceid=6, compaction requested=false 2023-05-27 22:58:55,916 DEBUG [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for eab518ddf145c5cb7d4c7bb9336d6efc: 2023-05-27 22:58:55,916 DEBUG [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on hbase:namespace,,1685228325323.eab518ddf145c5cb7d4c7bb9336d6efc. 2023-05-27 22:58:55,916 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-27 22:58:55,916 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
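[Annotation, not part of the recorded output] The flush above persists 78 B of hbase:namespace data (2 entries) into a single ~4.8 K HFile under .../data/hbase/namespace/eab518ddf145c5cb7d4c7bb9336d6efc/info/. Purely as a hypothetical illustration, a test running against the same minicluster could confirm the memstore reached disk by counting store files on the region, along these lines; the helper name and the use of the HBaseTestingUtility/HRegion internals here are assumptions, not something this log shows the test doing.

import java.util.List;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.regionserver.HRegion;
import org.apache.hadoop.hbase.util.Bytes;

public class StoreFileCheck {
  // Count the 'info' store files across all hbase:namespace regions on the minicluster.
  static int countNamespaceStoreFiles(HBaseTestingUtility testUtil) {
    int total = 0;
    List<HRegion> regions =
        testUtil.getMiniHBaseCluster().getRegions(TableName.valueOf("hbase:namespace"));
    for (HRegion region : regions) {
      total += region.getStore(Bytes.toBytes("info")).getStorefilesCount();
    }
    return total; // expected to be 1 right after the flush committed above
  }
}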
2023-05-27 22:58:55,916 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:58:55,916 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'hbase:namespace' locally acquired 2023-05-27 22:58:55,916 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,38139,1685228324316' joining acquired barrier for procedure (hbase:namespace) in zk 2023-05-27 22:58:55,918 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:58:55,918 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-27 22:58:55,918 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:58:55,918 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-27 22:58:55,918 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-27 22:58:55,919 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/hbase:namespace 2023-05-27 22:58:55,919 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'hbase:namespace' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-27 22:58:55,919 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-27 22:58:55,919 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-27 22:58:55,919 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-27 22:58:55,919 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:58:55,920 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-27 22:58:55,920 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,38139,1685228324316' joining acquired barrier for procedure 'hbase:namespace' on coordinator 2023-05-27 22:58:55,920 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'hbase:namespace' starting 'in-barrier' execution. 
2023-05-27 22:58:55,920 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@4962d8b3[Count = 0] remaining members to acquire global barrier 2023-05-27 22:58:55,920 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-27 22:58:55,925 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-05-27 22:58:55,925 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-27 22:58:55,925 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-27 22:58:55,925 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'hbase:namespace' received 'reached' from coordinator. 2023-05-27 22:58:55,925 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'hbase:namespace' locally completed 2023-05-27 22:58:55,925 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:58:55,925 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-27 22:58:55,925 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'hbase:namespace' completed for member 'jenkins-hbase4.apache.org,38139,1685228324316' in zk 2023-05-27 22:58:55,927 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:58:55,927 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'hbase:namespace' has notified controller of completion 2023-05-27 22:58:55,927 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:58:55,927 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-27 22:58:55,927 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-27 22:58:55,927 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-27 22:58:55,927 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'hbase:namespace' completed. 2023-05-27 22:58:55,928 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-27 22:58:55,928 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-27 22:58:55,928 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-27 22:58:55,928 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:58:55,929 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-27 22:58:55,929 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-27 22:58:55,929 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:58:55,930 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'hbase:namespace' member 'jenkins-hbase4.apache.org,38139,1685228324316': 2023-05-27 22:58:55,930 INFO [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'hbase:namespace' execution completed 2023-05-27 22:58:55,930 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-05-27 22:58:55,930 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,38139,1685228324316' released barrier for procedure'hbase:namespace', counting down latch. Waiting for 0 more 2023-05-27 22:58:55,930 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-27 22:58:55,930 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:hbase:namespace 2023-05-27 22:58:55,930 INFO [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure hbase:namespaceincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-27 22:58:55,932 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-05-27 22:58:55,932 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-05-27 22:58:55,932 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/hbase:namespace 2023-05-27 22:58:55,932 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-27 22:58:55,932 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureUtil(225): Current zk system: 2023-05-27 22:58:55,932 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-27 22:58:55,932 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/hbase:namespace 2023-05-27 22:58:55,932 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace 2023-05-27 22:58:55,932 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-27 22:58:55,932 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-27 22:58:55,932 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:58:55,932 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 22:58:55,933 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-27 22:58:55,933 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace 2023-05-27 22:58:55,933 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-27 22:58:55,933 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-27 22:58:55,934 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:58:55,934 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:58:55,934 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-27 22:58:55,934 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-27 22:58:55,934 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:58:55,940 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:58:55,940 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-27 22:58:55,940 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-27 22:58:55,940 DEBUG 
[Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-27 22:58:55,940 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-27 22:58:55,940 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-27 22:58:55,940 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:58:55,940 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 22:58:55,940 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-05-27 22:58:55,940 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-05-27 22:58:55,940 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-05-27 22:58:55,940 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-27 22:58:55,940 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 22:58:55,940 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-27 22:58:55,940 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'hbase:namespace' 2023-05-27 22:58:55,941 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-05-27 22:58:55,943 DEBUG [Listener at localhost/36811] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : hbase:namespace'' to complete. (max 20000 ms per retry) 2023-05-27 22:58:55,944 DEBUG [Listener at localhost/36811] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-27 22:59:05,944 DEBUG [Listener at localhost/36811] client.HBaseAdmin(2704): Getting current status of procedure from master... 
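[Annotation, not part of the recorded output] The "procedure request for: flush-table-proc", the 300000 ms wait, and the 10000 ms polling sleep above are the client side of the ZooKeeper-coordinated flush procedure whose acquire/reached/abort znodes were walked in the preceding records. A hedged sketch of issuing that kind of request through the Admin API is shown below; whether this particular test drives it through execProcedure or some other flush call cannot be determined from the log alone, so treat the method and names as an assumption.

import java.io.IOException;
import java.util.HashMap;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class FlushTableProcSketch {
  // Ask the master to run the distributed flush-table-proc for a table and wait for it.
  static void flushViaProcedure(Admin admin, TableName table) throws IOException {
    // The signature "flush-table-proc" matches MasterFlushTableProcedureManager in the log;
    // execProcedure returns once the master reports the procedure done, which is what the
    // "Checking to see if procedure from request:flush-table-proc is done" records reflect.
    admin.execProcedure("flush-table-proc", table.getNameAsString(), new HashMap<String, String>());
  }
}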
2023-05-27 22:59:05,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-27 22:59:05,959 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-05-27 22:59:05,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:05,961 INFO [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-27 22:59:05,962 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-27 22:59:05,962 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-27 22:59:05,962 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 2023-05-27 22:59:05,963 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:05,963 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:05,964 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-27 22:59:05,964 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:05,964 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-27 22:59:05,964 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 22:59:05,964 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:05,964 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-27 22:59:05,964 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:05,965 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:05,965 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-27 22:59:05,965 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:05,965 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:05,965 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:05,965 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-27 22:59:05,965 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-27 22:59:05,966 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-27 22:59:05,966 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-27 22:59:05,966 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-27 22:59:05,966 DEBUG [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df. 2023-05-27 22:59:05,966 DEBUG [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df. started... 
2023-05-27 22:59:05,967 INFO [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing d5a304880ee82d316c8dac1e8851e2df 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-27 22:59:05,981 INFO [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=5 (bloomFilter=true), to=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/.tmp/info/b3c0639ba64444439bdc5ce70ab8302d 2023-05-27 22:59:05,988 DEBUG [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/.tmp/info/b3c0639ba64444439bdc5ce70ab8302d as hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/info/b3c0639ba64444439bdc5ce70ab8302d 2023-05-27 22:59:05,995 INFO [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/info/b3c0639ba64444439bdc5ce70ab8302d, entries=1, sequenceid=5, filesize=5.8 K 2023-05-27 22:59:05,996 INFO [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for d5a304880ee82d316c8dac1e8851e2df in 29ms, sequenceid=5, compaction requested=false 2023-05-27 22:59:05,997 DEBUG [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for d5a304880ee82d316c8dac1e8851e2df: 2023-05-27 22:59:05,997 DEBUG [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df. 2023-05-27 22:59:05,997 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-27 22:59:05,997 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
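The records above show one complete memstore flush for region d5a304880ee82d316c8dac1e8851e2df: roughly 1.05 KB of data is written to a temporary HFile under .tmp, committed into the info family at sequenceid=5, and the flush finishes in 29ms. The test drives this through the flush-table-proc procedure traced in this log; a client can request the same table flush more directly through the Admin API. A minimal, hypothetical Java sketch (connection setup, class name, and error handling are illustrative, not taken from the test source):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class FlushTableSketch {
      public static void main(String[] args) throws Exception {
        // Picks up hbase-site.xml (ZooKeeper quorum etc.) from the classpath.
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Asks the master to flush every region of the table; each region
          // server then writes its memstore out as a new HFile, as logged above.
          admin.flush(TableName.valueOf("TestLogRolling-testCompactionRecordDoesntBlockRolling"));
        }
      }
    }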
2023-05-27 22:59:05,997 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:05,997 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-27 22:59:05,997 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,38139,1685228324316' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-27 22:59:05,999 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:05,999 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:05,999 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:05,999 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-27 22:59:05,999 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-27 22:59:06,000 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:06,000 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-27 22:59:06,000 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-27 22:59:06,000 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-27 22:59:06,001 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:06,001 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:06,001 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-27 22:59:06,002 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,38139,1685228324316' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-27 22:59:06,002 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@4d5d5547[Count = 0] remaining members to acquire 
global barrier 2023-05-27 22:59:06,002 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-27 22:59:06,002 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:06,003 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:06,003 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:06,003 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:06,003 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 2023-05-27 22:59:06,003 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-27 22:59:06,003 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:06,003 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-27 22:59:06,003 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,38139,1685228324316' in zk 2023-05-27 22:59:06,006 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:06,006 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-27 22:59:06,006 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:06,006 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-27 
22:59:06,006 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-27 22:59:06,006 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-27 22:59:06,006 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-05-27 22:59:06,007 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-27 22:59:06,007 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-27 22:59:06,007 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:06,008 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:06,008 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-27 22:59:06,008 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:06,008 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:06,009 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,38139,1685228324316': 2023-05-27 22:59:06,009 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,38139,1685228324316' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-27 22:59:06,009 INFO [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-27 22:59:06,009 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
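The znode traffic above is the two-phase barrier behind flush-table-proc: the coordinator creates /hbase/flush-table-proc/acquired/<procedure>, each member flushes its regions and then adds a child node for itself, the coordinator creates the matching node under /reached, and each member acknowledges under /reached/<procedure>/<member> before the coordinator clears the acquired, reached, and abort subtrees. A schematic member-side sketch using the plain ZooKeeper client, assuming the coordinator has already created the procedure's parent znodes (simplified: the real ZKProcedureMemberRpcs uses watches rather than polling, and this class name is illustrative):

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class BarrierMemberSketch {
      public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("127.0.0.1:54484", 30_000, event -> { });
        String proc = "TestLogRolling-testCompactionRecordDoesntBlockRolling";
        String member = "jenkins-hbase4.apache.org,38139,1685228324316";
        String base = "/hbase/flush-table-proc";
        // 1. Local work (the region flush) is done, so join the acquired barrier.
        zk.create(base + "/acquired/" + proc + "/" + member, new byte[0],
                  ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        // 2. Wait for the coordinator to publish the global 'reached' barrier.
        while (zk.exists(base + "/reached/" + proc, false) == null) {
          Thread.sleep(100);
        }
        // 3. Acknowledge completion; the coordinator then cleans up all znodes.
        zk.create(base + "/reached/" + proc + "/" + member, new byte[0],
                  ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        zk.close();
      }
    }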
2023-05-27 22:59:06,009 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-27 22:59:06,009 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:06,009 INFO [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-27 22:59:06,017 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:06,017 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:06,018 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:06,018 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-27 22:59:06,018 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-27 22:59:06,018 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-27 22:59:06,018 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:06,018 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:06,018 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-27 22:59:06,018 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:06,018 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-27 22:59:06,018 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 22:59:06,018 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:06,018 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:06,019 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-27 22:59:06,019 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:06,019 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:06,019 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:06,020 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-27 22:59:06,020 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:06,020 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:06,022 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:06,022 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-27 22:59:06,022 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:06,023 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-27 22:59:06,023 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-27 22:59:06,023 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 
2023-05-27 22:59:06,023 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-27 22:59:06,023 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 22:59:06,023 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-05-27 22:59:06,023 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-27 22:59:06,023 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:06,023 DEBUG [Listener at localhost/36811] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-05-27 22:59:06,023 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-27 22:59:06,023 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 22:59:06,023 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:06,023 DEBUG [Listener at localhost/36811] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 
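On the client side, HBaseAdmin submits the procedure and then polls the master for completion, sleeping 10000ms between attempts under a 300000 ms cap (the 'Waiting a max of 300000 ms', 'Sleeping: 10000ms', and 'Getting current status' lines above). HBaseAdmin performs this wait internally; spelled out against the public Admin API the loop would look roughly like the hedged sketch below (class and method names are illustrative):

    import java.io.IOException;
    import java.util.Collections;
    import java.util.Map;
    import org.apache.hadoop.hbase.client.Admin;

    public class ExecProcedureSketch {
      // Submit the flush-table procedure and poll until the master reports it done.
      static void flushViaProcedure(Admin admin, String table)
          throws IOException, InterruptedException {
        Map<String, String> props = Collections.emptyMap();
        admin.execProcedure("flush-table-proc", table, props);
        while (!admin.isProcedureFinished("flush-table-proc", table, props)) {
          Thread.sleep(10_000);   // mirrors the 10000ms sleeps logged by HBaseAdmin
        }
      }
    }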
2023-05-27 22:59:06,024 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:06,024 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:06,024 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:07,121 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3921fc691797cf5f: from storage DS-b1650dfb-be31-4cf1-a2ab-111e9fe0a3da node DatanodeRegistration(127.0.0.1:42467, datanodeUuid=bcd28621-0f6f-4ce9-a51f-4c55bdd17df3, infoPort=39245, infoSecurePort=0, ipcPort=43961, storageInfo=lv=-57;cid=testClusterID;nsid=1585538242;c=1685228323738), blocks: 8, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 22:59:07,122 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3921fc691797cf5f: from storage DS-13d31e30-0061-4649-a369-cbf85e5a92bb node DatanodeRegistration(127.0.0.1:42467, datanodeUuid=bcd28621-0f6f-4ce9-a51f-4c55bdd17df3, infoPort=39245, infoSecurePort=0, ipcPort=43961, storageInfo=lv=-57;cid=testClusterID;nsid=1585538242;c=1685228323738), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 22:59:16,024 DEBUG [Listener at localhost/36811] client.HBaseAdmin(2704): Getting current status of procedure from master... 2023-05-27 22:59:16,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-27 22:59:16,031 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-05-27 22:59:16,033 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 
2023-05-27 22:59:16,034 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:16,034 INFO [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-27 22:59:16,034 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-27 22:59:16,035 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-27 22:59:16,035 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 2023-05-27 22:59:16,035 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:16,035 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:16,037 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:16,037 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-27 22:59:16,038 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-27 22:59:16,038 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 22:59:16,038 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:16,038 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-27 22:59:16,038 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:16,038 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, 
/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:16,038 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-27 22:59:16,039 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:16,039 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:16,039 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-05-27 22:59:16,039 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:16,039 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-27 22:59:16,039 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-27 22:59:16,039 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-27 22:59:16,039 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-27 22:59:16,040 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-27 22:59:16,040 DEBUG [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df. 2023-05-27 22:59:16,040 DEBUG [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df. started... 
2023-05-27 22:59:16,040 INFO [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing d5a304880ee82d316c8dac1e8851e2df 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-27 22:59:16,049 INFO [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/.tmp/info/0fb99053f8d043ff9ac70fc649be26be 2023-05-27 22:59:16,057 DEBUG [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/.tmp/info/0fb99053f8d043ff9ac70fc649be26be as hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/info/0fb99053f8d043ff9ac70fc649be26be 2023-05-27 22:59:16,062 INFO [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/info/0fb99053f8d043ff9ac70fc649be26be, entries=1, sequenceid=9, filesize=5.8 K 2023-05-27 22:59:16,063 INFO [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for d5a304880ee82d316c8dac1e8851e2df in 23ms, sequenceid=9, compaction requested=false 2023-05-27 22:59:16,063 DEBUG [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for d5a304880ee82d316c8dac1e8851e2df: 2023-05-27 22:59:16,063 DEBUG [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df. 2023-05-27 22:59:16,063 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-27 22:59:16,063 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
2023-05-27 22:59:16,063 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:16,063 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-27 22:59:16,063 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,38139,1685228324316' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-27 22:59:16,065 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:16,065 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:16,065 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:16,066 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-27 22:59:16,066 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-27 22:59:16,066 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:16,066 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-27 22:59:16,066 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-27 22:59:16,066 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-27 22:59:16,066 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:16,067 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:16,067 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-27 22:59:16,067 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,38139,1685228324316' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-27 22:59:16,067 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@85fd288[Count = 0] remaining members to acquire 
global barrier 2023-05-27 22:59:16,067 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-27 22:59:16,067 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:16,068 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:16,068 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:16,069 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:16,069 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 2023-05-27 22:59:16,069 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-27 22:59:16,069 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,38139,1685228324316' in zk 2023-05-27 22:59:16,069 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:16,069 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-27 22:59:16,070 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-27 22:59:16,070 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:16,070 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-27 22:59:16,071 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:16,071 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-27 22:59:16,071 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-27 22:59:16,071 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-05-27 22:59:16,071 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-27 22:59:16,071 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-27 22:59:16,072 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:16,072 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:16,072 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-27 22:59:16,072 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:16,073 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:16,073 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,38139,1685228324316': 2023-05-27 22:59:16,073 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,38139,1685228324316' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-27 22:59:16,073 INFO [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-27 22:59:16,073 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
2023-05-27 22:59:16,073 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-27 22:59:16,073 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:16,073 INFO [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-27 22:59:16,076 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:16,076 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:16,076 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:16,076 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-27 22:59:16,076 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-27 22:59:16,076 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:16,076 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-27 22:59:16,076 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:16,076 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-27 22:59:16,076 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-27 22:59:16,076 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 22:59:16,076 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:16,077 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:16,077 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:16,077 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-27 22:59:16,077 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:16,077 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:16,079 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:16,079 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-27 22:59:16,079 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:16,079 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:16,082 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:16,082 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-27 22:59:16,082 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:16,082 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-27 22:59:16,082 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:16,082 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-27 22:59:16,082 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 
2023-05-27 22:59:16,082 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:16,082 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-27 22:59:16,082 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-27 22:59:16,082 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:16,083 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 22:59:16,083 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:16,083 DEBUG [Listener at localhost/36811] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-05-27 22:59:16,083 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:16,083 DEBUG [Listener at localhost/36811] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-27 22:59:16,083 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-27 22:59:16,083 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 22:59:26,083 DEBUG [Listener at localhost/36811] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-05-27 22:59:26,084 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-27 22:59:26,096 INFO [Listener at localhost/36811] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/WALs/jenkins-hbase4.apache.org,38139,1685228324316/jenkins-hbase4.apache.org%2C38139%2C1685228324316.1685228325104 with entries=13, filesize=6.44 KB; new WAL /user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/WALs/jenkins-hbase4.apache.org,38139,1685228324316/jenkins-hbase4.apache.org%2C38139%2C1685228324316.1685228366086 2023-05-27 22:59:26,096 DEBUG [Listener at localhost/36811] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42467,DS-b1650dfb-be31-4cf1-a2ab-111e9fe0a3da,DISK], DatanodeInfoWithStorage[127.0.0.1:34983,DS-b4db70ee-5a7a-46c1-81d2-e844f25d2a15,DISK]] 2023-05-27 22:59:26,096 DEBUG [Listener at localhost/36811] wal.AbstractFSWAL(716): hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/WALs/jenkins-hbase4.apache.org,38139,1685228324316/jenkins-hbase4.apache.org%2C38139%2C1685228324316.1685228325104 is not closed yet, will try archiving it next time 2023-05-27 22:59:26,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-05-27 22:59:26,103 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-05-27 22:59:26,104 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:26,104 INFO [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-27 22:59:26,104 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-27 22:59:26,104 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-27 22:59:26,104 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
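(The "Rolled WAL ... with entries=13 ... new WAL ..." line is the test forcing a log roll between flushes. In the test this is done directly on the WAL instance; a client can request an equivalent roll through the Admin API, as sketched below with the server name taken from the log — treating the two as equivalent is the assumption here.)

import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;

public class RollWalSketch {
  // Ask the region server to close its current WAL file and open a new one,
  // which produces a "Rolled WAL ... new WAL ..." line like the one above.
  static void rollWal(Admin admin) throws Exception {
    ServerName rs = ServerName.valueOf("jenkins-hbase4.apache.org", 38139, 1685228324316L);
    admin.rollWALWriter(rs);
  }
}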
2023-05-27 22:59:26,104 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:26,105 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:26,106 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:26,106 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-27 22:59:26,106 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-27 22:59:26,106 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 22:59:26,106 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:26,106 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-27 22:59:26,106 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:26,107 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:26,107 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-27 22:59:26,107 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:26,107 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:26,107 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-05-27 22:59:26,107 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:26,107 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' 
subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-27 22:59:26,107 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-27 22:59:26,108 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-27 22:59:26,108 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-27 22:59:26,108 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-27 22:59:26,108 DEBUG [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df. 2023-05-27 22:59:26,108 DEBUG [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df. started... 2023-05-27 22:59:26,108 INFO [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing d5a304880ee82d316c8dac1e8851e2df 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-27 22:59:26,119 INFO [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=13 (bloomFilter=true), to=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/.tmp/info/905b3947e31c45bb970174e597432fb4 2023-05-27 22:59:26,128 DEBUG [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/.tmp/info/905b3947e31c45bb970174e597432fb4 as hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/info/905b3947e31c45bb970174e597432fb4 2023-05-27 22:59:26,133 INFO [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/info/905b3947e31c45bb970174e597432fb4, entries=1, sequenceid=13, filesize=5.8 K 2023-05-27 22:59:26,134 INFO [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for d5a304880ee82d316c8dac1e8851e2df in 26ms, sequenceid=13, compaction requested=true 2023-05-27 22:59:26,134 DEBUG 
[rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for d5a304880ee82d316c8dac1e8851e2df: 2023-05-27 22:59:26,135 DEBUG [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df. 2023-05-27 22:59:26,135 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-27 22:59:26,135 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-05-27 22:59:26,135 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:26,135 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-27 22:59:26,135 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,38139,1685228324316' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-27 22:59:26,137 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:26,137 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:26,137 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:26,137 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-27 22:59:26,137 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-27 22:59:26,137 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:26,137 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-27 22:59:26,137 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-27 22:59:26,138 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-27 22:59:26,138 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:26,138 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:26,139 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-27 22:59:26,139 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,38139,1685228324316' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-27 22:59:26,139 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@66e50132[Count = 0] remaining members to acquire global barrier 2023-05-27 22:59:26,139 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-27 22:59:26,139 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:26,142 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:26,142 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:26,143 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:26,143 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 
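(The acquire/reached exchange above is a two-phase barrier run over znodes: each member creates a child under .../acquired/<proc> once its local flush is done, the coordinator creates .../reached/<proc> when everyone has checked in, and members watch for that node before reporting completion. A standalone sketch of the member side against the same znode layout; the watcher wiring is simplified to a single exists() watch plus a latch, and abort/error handling is omitted.)

import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class BarrierMemberSketch {
  public static void main(String[] args) throws Exception {
    String proc = "TestLogRolling-testCompactionRecordDoesntBlockRolling";
    String member = "jenkins-hbase4.apache.org,38139,1685228324316";
    String base = "/hbase/flush-table-proc";
    CountDownLatch reached = new CountDownLatch(1);

    ZooKeeper zk = new ZooKeeper("127.0.0.1:54484", 30000, event -> {
      // Fires when the coordinator creates the "reached" barrier node watched below.
      if ((base + "/reached/" + proc).equals(event.getPath())) {
        reached.countDown();
      }
    });
    // Phase 1: announce that this member has acquired (local flush done).
    zk.create(base + "/acquired/" + proc + "/" + member, new byte[0],
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    // Watch for the global barrier; the default watcher above releases the latch.
    if (zk.exists(base + "/reached/" + proc, true) != null) {
      reached.countDown();
    }
    reached.await();
    // Phase 2: report completion so the coordinator can release the barrier.
    zk.create(base + "/reached/" + proc + "/" + member, new byte[0],
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    zk.close();
  }
}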
2023-05-27 22:59:26,143 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:26,143 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-27 22:59:26,143 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-27 22:59:26,143 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,38139,1685228324316' in zk 2023-05-27 22:59:26,144 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-27 22:59:26,144 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:26,144 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-27 22:59:26,145 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:26,145 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-27 22:59:26,145 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-27 22:59:26,145 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 
2023-05-27 22:59:26,145 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-27 22:59:26,145 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-27 22:59:26,146 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:26,146 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:26,146 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-27 22:59:26,146 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:26,147 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:26,147 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,38139,1685228324316': 2023-05-27 22:59:26,147 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,38139,1685228324316' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-27 22:59:26,147 INFO [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-27 22:59:26,147 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-05-27 22:59:26,147 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-27 22:59:26,147 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:26,147 INFO [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-27 22:59:26,149 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:26,149 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:26,149 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:26,149 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-27 22:59:26,149 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-27 22:59:26,149 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:26,149 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:26,149 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-27 22:59:26,149 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-27 22:59:26,149 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-27 22:59:26,149 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 22:59:26,149 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:26,149 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:26,149 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:26,150 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-27 22:59:26,150 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:26,151 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:26,151 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:26,151 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-27 22:59:26,151 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:26,152 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:26,155 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:26,155 DEBUG [Listener at 
localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-27 22:59:26,155 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:26,155 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-27 22:59:26,155 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 22:59:26,155 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:26,155 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-27 22:59:26,155 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:26,155 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-27 22:59:26,155 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-27 22:59:26,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 
2023-05-27 22:59:26,156 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-27 22:59:26,156 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 22:59:26,155 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:26,156 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:26,156 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:26,156 DEBUG [Listener at localhost/36811] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-05-27 22:59:26,156 DEBUG [Listener at localhost/36811] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-27 22:59:36,156 DEBUG [Listener at localhost/36811] client.HBaseAdmin(2704): Getting current status of procedure from master... 2023-05-27 22:59:36,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-27 22:59:36,158 DEBUG [Listener at localhost/36811] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-27 22:59:36,161 DEBUG [Listener at localhost/36811] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 17769 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-27 22:59:36,161 DEBUG [Listener at localhost/36811] regionserver.HStore(1912): d5a304880ee82d316c8dac1e8851e2df/info is initiating minor compaction (all files) 2023-05-27 22:59:36,161 INFO [Listener at localhost/36811] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-27 22:59:36,162 INFO [Listener at localhost/36811] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-27 22:59:36,162 INFO [Listener at localhost/36811] regionserver.HRegion(2259): Starting compaction of d5a304880ee82d316c8dac1e8851e2df/info in TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df. 
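(The ExploringCompactionPolicy line above reports the three ~5.8 K store files as "1 in ratio": every file in the candidate set is no larger than the combined size of the other candidates multiplied by the compaction ratio (hbase.hstore.compaction.ratio, 1.2 by default). A small arithmetic sketch of that check; the per-file sizes are illustrative, obtained by splitting the 17769-byte total evenly.)

public class CompactionRatioSketch {
  // "In ratio": each candidate file must be <= (sum of the other candidates) * ratio.
  static boolean filesInRatio(long[] sizes, double ratio) {
    long total = 0;
    for (long s : sizes) {
      total += s;
    }
    for (long s : sizes) {
      if (s > (total - s) * ratio) {
        return false;
      }
    }
    return true;
  }

  public static void main(String[] args) {
    // Three roughly equal ~5.9 K files (17769 bytes total) easily pass with ratio 1.2,
    // so the whole set is selected, matching the "selected 3 files of size 17769" line.
    System.out.println(filesInRatio(new long[] {5923, 5923, 5923}, 1.2));  // true
  }
}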
2023-05-27 22:59:36,162 INFO [Listener at localhost/36811] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/info/b3c0639ba64444439bdc5ce70ab8302d, hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/info/0fb99053f8d043ff9ac70fc649be26be, hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/info/905b3947e31c45bb970174e597432fb4] into tmpdir=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/.tmp, totalSize=17.4 K 2023-05-27 22:59:36,163 DEBUG [Listener at localhost/36811] compactions.Compactor(207): Compacting b3c0639ba64444439bdc5ce70ab8302d, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=5, earliestPutTs=1685228345954 2023-05-27 22:59:36,163 DEBUG [Listener at localhost/36811] compactions.Compactor(207): Compacting 0fb99053f8d043ff9ac70fc649be26be, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=9, earliestPutTs=1685228356026 2023-05-27 22:59:36,164 DEBUG [Listener at localhost/36811] compactions.Compactor(207): Compacting 905b3947e31c45bb970174e597432fb4, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=13, earliestPutTs=1685228366085 2023-05-27 22:59:36,174 INFO [Listener at localhost/36811] throttle.PressureAwareThroughputController(145): d5a304880ee82d316c8dac1e8851e2df#info#compaction#19 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-27 22:59:36,188 DEBUG [Listener at localhost/36811] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/.tmp/info/dbf2e0da974c4b56b489a5e64e672e50 as hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/info/dbf2e0da974c4b56b489a5e64e672e50 2023-05-27 22:59:36,194 INFO [Listener at localhost/36811] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in d5a304880ee82d316c8dac1e8851e2df/info of d5a304880ee82d316c8dac1e8851e2df into dbf2e0da974c4b56b489a5e64e672e50(size=8.0 K), total size for store is 8.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
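(Here the compaction is run directly from the test's listener thread on the region's store. The same minor compaction can be requested from a client through the Admin API, as in the sketch below; the table name is the one from the log, and treating the two paths as equivalent is our assumption.)

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class RequestCompactionSketch {
  // Queue a compaction of the test table; the region server then performs the same
  // select -> rewrite into .tmp -> commit sequence shown in the log above.
  static void requestCompaction(Admin admin) throws Exception {
    admin.compact(TableName.valueOf("TestLogRolling-testCompactionRecordDoesntBlockRolling"));
  }
}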
2023-05-27 22:59:36,194 DEBUG [Listener at localhost/36811] regionserver.HRegion(2289): Compaction status journal for d5a304880ee82d316c8dac1e8851e2df: 2023-05-27 22:59:36,209 INFO [Listener at localhost/36811] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/WALs/jenkins-hbase4.apache.org,38139,1685228324316/jenkins-hbase4.apache.org%2C38139%2C1685228324316.1685228366086 with entries=4, filesize=2.45 KB; new WAL /user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/WALs/jenkins-hbase4.apache.org,38139,1685228324316/jenkins-hbase4.apache.org%2C38139%2C1685228324316.1685228376196 2023-05-27 22:59:36,209 DEBUG [Listener at localhost/36811] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34983,DS-b4db70ee-5a7a-46c1-81d2-e844f25d2a15,DISK], DatanodeInfoWithStorage[127.0.0.1:42467,DS-b1650dfb-be31-4cf1-a2ab-111e9fe0a3da,DISK]] 2023-05-27 22:59:36,209 DEBUG [Listener at localhost/36811] wal.AbstractFSWAL(716): hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/WALs/jenkins-hbase4.apache.org,38139,1685228324316/jenkins-hbase4.apache.org%2C38139%2C1685228324316.1685228366086 is not closed yet, will try archiving it next time 2023-05-27 22:59:36,210 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/WALs/jenkins-hbase4.apache.org,38139,1685228324316/jenkins-hbase4.apache.org%2C38139%2C1685228324316.1685228325104 to hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/oldWALs/jenkins-hbase4.apache.org%2C38139%2C1685228324316.1685228325104 2023-05-27 22:59:36,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-05-27 22:59:36,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-05-27 22:59:36,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:36,216 INFO [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-27 22:59:36,216 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-27 22:59:36,217 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-27 22:59:36,217 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
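(The WAL-Archive thread's "Archiving ... to .../oldWALs/..." line moves a WAL whose edits are all persisted out of the server's WALs directory. Functionally that is a rename within the same HDFS; a sketch with FileSystem.rename using the paths from that log line — HBase actually routes this through its own archiving helpers, so the direct rename is illustrative only.)

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ArchiveWalSketch {
  // Move a no-longer-needed WAL into oldWALs, mirroring the WAL-Archive-0 log line.
  static void archiveWal(Configuration conf) throws IOException {
    FileSystem fs = FileSystem.get(conf);
    Path src = new Path("/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/WALs/"
        + "jenkins-hbase4.apache.org,38139,1685228324316/"
        + "jenkins-hbase4.apache.org%2C38139%2C1685228324316.1685228325104");
    Path dst = new Path("/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/oldWALs/"
        + "jenkins-hbase4.apache.org%2C38139%2C1685228324316.1685228325104");
    if (!fs.rename(src, dst)) {
      throw new IOException("failed to archive WAL " + src);
    }
  }
}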
2023-05-27 22:59:36,217 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:36,217 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:36,221 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:36,221 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-27 22:59:36,222 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-27 22:59:36,222 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 22:59:36,222 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:36,222 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-27 22:59:36,222 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:36,222 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:36,222 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-27 22:59:36,222 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:36,222 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:36,223 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-05-27 22:59:36,223 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:36,223 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' 
subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-27 22:59:36,223 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-27 22:59:36,223 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-27 22:59:36,223 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-27 22:59:36,223 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-27 22:59:36,223 DEBUG [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df. 2023-05-27 22:59:36,223 DEBUG [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df. started... 2023-05-27 22:59:36,223 INFO [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing d5a304880ee82d316c8dac1e8851e2df 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-27 22:59:36,232 INFO [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=18 (bloomFilter=true), to=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/.tmp/info/ba227e103055452cb6ac9adbf344c5fa 2023-05-27 22:59:36,237 DEBUG [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/.tmp/info/ba227e103055452cb6ac9adbf344c5fa as hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/info/ba227e103055452cb6ac9adbf344c5fa 2023-05-27 22:59:36,242 INFO [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/info/ba227e103055452cb6ac9adbf344c5fa, entries=1, sequenceid=18, filesize=5.8 K 2023-05-27 22:59:36,243 INFO [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for d5a304880ee82d316c8dac1e8851e2df in 20ms, sequenceid=18, compaction requested=false 2023-05-27 22:59:36,243 DEBUG 
[rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for d5a304880ee82d316c8dac1e8851e2df: 2023-05-27 22:59:36,243 DEBUG [rs(jenkins-hbase4.apache.org,38139,1685228324316)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df. 2023-05-27 22:59:36,243 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-27 22:59:36,243 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-05-27 22:59:36,244 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:36,244 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-27 22:59:36,244 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,38139,1685228324316' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-27 22:59:36,246 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:36,246 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:36,246 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:36,246 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-27 22:59:36,246 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-27 22:59:36,246 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:36,247 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-27 22:59:36,247 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-27 22:59:36,247 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-27 22:59:36,247 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:36,247 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:36,247 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-27 22:59:36,248 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,38139,1685228324316' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-27 22:59:36,248 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@391dfa1d[Count = 0] remaining members to acquire global barrier 2023-05-27 22:59:36,248 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-27 22:59:36,248 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:36,249 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:36,249 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:36,249 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:36,249 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 
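(The "Waiting on: java.util.concurrent.CountDownLatch@...[Count = 0]" line shows how the coordinator tracks barrier progress: one latch sized to the member list — a single region server here, so the count hits zero as soon as that member joins. A small sketch of the pattern, with names of our own choosing.)

import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// One latch per barrier phase, sized to the number of participating members;
// each ZK "member joined" event counts it down, and the coordinator thread
// blocks until every member has checked in (or the phase times out).
class BarrierLatchSketch {
  private final CountDownLatch acquired;

  BarrierLatchSketch(List<String> members) {
    this.acquired = new CountDownLatch(members.size());
  }

  // Called from the ZK event thread when /acquired/<proc>/<member> appears.
  void memberAcquired(String member) {
    acquired.countDown();
  }

  // Called from the coordinator pool thread before it creates the "reached" node.
  boolean awaitAllAcquired(long timeoutMs) throws InterruptedException {
    return acquired.await(timeoutMs, TimeUnit.MILLISECONDS);
  }
}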
2023-05-27 22:59:36,249 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-27 22:59:36,249 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,38139,1685228324316' in zk 2023-05-27 22:59:36,249 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:36,249 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-27 22:59:36,252 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:36,252 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-27 22:59:36,252 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:36,252 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-27 22:59:36,252 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-27 22:59:36,252 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-27 22:59:36,252 DEBUG [member: 'jenkins-hbase4.apache.org,38139,1685228324316' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 
2023-05-27 22:59:36,253 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-27 22:59:36,253 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-27 22:59:36,253 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:36,254 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:36,254 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-27 22:59:36,254 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:36,254 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:36,255 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,38139,1685228324316': 2023-05-27 22:59:36,255 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,38139,1685228324316' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-27 22:59:36,255 INFO [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-27 22:59:36,255 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-05-27 22:59:36,255 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-27 22:59:36,255 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:36,255 INFO [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-27 22:59:36,256 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:36,256 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:36,256 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:36,257 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-27 22:59:36,257 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-27 22:59:36,256 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:36,257 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:36,256 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-27 22:59:36,257 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-27 22:59:36,257 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:36,257 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-27 22:59:36,257 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 22:59:36,257 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:36,257 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:36,257 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-27 22:59:36,258 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:36,258 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:36,258 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:36,258 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-27 22:59:36,259 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:36,259 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:36,262 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:36,262 DEBUG [Listener at 
localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-27 22:59:36,262 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:36,262 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-27 22:59:36,262 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:36,262 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-27 22:59:36,262 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 22:59:36,263 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:36,263 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:36,263 DEBUG [(jenkins-hbase4.apache.org,40691,1685228324276)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
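The znode dumps above trace the two-phase flush-table procedure: members register under acquired/, move to reached/ once their flush is done, and the coordinator clears all three branches (acquired, reached, abort) when the barrier completes and the timeout timer is marked done. From the client side the whole exchange is started with a single Admin call. A minimal sketch, assuming a reachable cluster and the table name used in this run; the generic execProcedure entry point is shown here for illustration, not the test's exact code, and the class name is invented.

import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushTableProcExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // "flush-table-proc" is the globally barriered procedure signature whose
      // znodes appear above; the instance name is the table being flushed.
      Map<String, String> props = new HashMap<>();
      admin.execProcedure("flush-table-proc",
          "TestLogRolling-testCompactionRecordDoesntBlockRolling", props);
    }
  }
}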
2023-05-27 22:59:36,263 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-27 22:59:36,263 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 22:59:36,263 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:36,263 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-27 22:59:36,263 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-05-27 22:59:36,263 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-27 22:59:36,264 DEBUG [Listener at localhost/36811] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-05-27 22:59:36,264 DEBUG [Listener at localhost/36811] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-27 22:59:46,264 DEBUG [Listener at localhost/36811] client.HBaseAdmin(2704): Getting current status of procedure from master... 
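After submitting the procedure, HBaseAdmin polls the master ("Waiting a max of 300000 ms ... (#1) Sleeping: 10000ms") until MasterRpcServices reports it done. A rough equivalent of that wait loop, assuming an open Admin handle; the 300 s budget and 10 s sleep mirror the values logged above, whereas the real client derives them from configuration, and the helper class name is invented.

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.client.Admin;

final class ProcedureWait {
  // Poll the master until the flush-table procedure for the given table is done.
  static void waitForFlushProc(Admin admin, String table)
      throws IOException, InterruptedException {
    long deadline = System.currentTimeMillis() + 300_000L;
    while (!admin.isProcedureFinished("flush-table-proc", table, Collections.emptyMap())) {
      if (System.currentTimeMillis() > deadline) {
        throw new IOException("flush-table-proc for " + table + " did not finish in time");
      }
      Thread.sleep(10_000L); // "(#1) Sleeping: 10000ms while waiting for procedure completion."
    }
  }
}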
2023-05-27 22:59:46,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40691] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-27 22:59:46,276 INFO [Listener at localhost/36811] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/WALs/jenkins-hbase4.apache.org,38139,1685228324316/jenkins-hbase4.apache.org%2C38139%2C1685228324316.1685228376196 with entries=3, filesize=1.97 KB; new WAL /user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/WALs/jenkins-hbase4.apache.org,38139,1685228324316/jenkins-hbase4.apache.org%2C38139%2C1685228324316.1685228386267 2023-05-27 22:59:46,276 DEBUG [Listener at localhost/36811] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42467,DS-b1650dfb-be31-4cf1-a2ab-111e9fe0a3da,DISK], DatanodeInfoWithStorage[127.0.0.1:34983,DS-b4db70ee-5a7a-46c1-81d2-e844f25d2a15,DISK]] 2023-05-27 22:59:46,276 DEBUG [Listener at localhost/36811] wal.AbstractFSWAL(716): hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/WALs/jenkins-hbase4.apache.org,38139,1685228324316/jenkins-hbase4.apache.org%2C38139%2C1685228324316.1685228376196 is not closed yet, will try archiving it next time 2023-05-27 22:59:46,276 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/WALs/jenkins-hbase4.apache.org,38139,1685228324316/jenkins-hbase4.apache.org%2C38139%2C1685228324316.1685228366086 to hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/oldWALs/jenkins-hbase4.apache.org%2C38139%2C1685228324316.1685228366086 2023-05-27 22:59:46,276 INFO [Listener at localhost/36811] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-27 22:59:46,277 INFO [Listener at localhost/36811] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-27 22:59:46,277 DEBUG [Listener at localhost/36811] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7a54aea9 to 127.0.0.1:54484 2023-05-27 22:59:46,277 DEBUG [Listener at localhost/36811] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 22:59:46,277 DEBUG [Listener at localhost/36811] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-27 22:59:46,277 DEBUG [Listener at localhost/36811] util.JVMClusterUtil(257): Found active master hash=1335287126, stopped=false 2023-05-27 22:59:46,277 INFO [Listener at localhost/36811] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,40691,1685228324276 2023-05-27 22:59:46,283 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-27 22:59:46,283 INFO [Listener at localhost/36811] procedure2.ProcedureExecutor(629): Stopping 2023-05-27 22:59:46,283 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:59:46,283 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 
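The roll at 22:59:46 ("Rolled WAL ... with entries=3, filesize=1.97 KB; new WAL ...") opens a fresh writer on a new datanode pipeline and lets the previous file be archived to oldWALs. The test drives rolls through the WAL instance it holds; from a plain client the same request can be made per region server. A sketch under that assumption, with illustrative names; the iteration over live servers is one way to reach every WAL, not what this test does.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class RollWalExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Ask each region server to roll its WAL; every roll creates a new
      // ...<timestamp> file under WALs/ and frees the old one for oldWALs/.
      for (ServerName sn : admin.getClusterMetrics().getLiveServerMetrics().keySet()) {
        admin.rollWALWriter(sn);
      }
    }
  }
}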
2023-05-27 22:59:46,284 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 22:59:46,283 DEBUG [Listener at localhost/36811] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x626428a9 to 127.0.0.1:54484 2023-05-27 22:59:46,284 DEBUG [Listener at localhost/36811] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 22:59:46,284 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 22:59:46,284 INFO [Listener at localhost/36811] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,38139,1685228324316' ***** 2023-05-27 22:59:46,284 INFO [Listener at localhost/36811] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-27 22:59:46,285 INFO [RS:0;jenkins-hbase4:38139] regionserver.HeapMemoryManager(220): Stopping 2023-05-27 22:59:46,285 INFO [RS:0;jenkins-hbase4:38139] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-27 22:59:46,285 INFO [RS:0;jenkins-hbase4:38139] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-27 22:59:46,285 INFO [RS:0;jenkins-hbase4:38139] regionserver.HRegionServer(3303): Received CLOSE for d5a304880ee82d316c8dac1e8851e2df 2023-05-27 22:59:46,285 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-27 22:59:46,285 INFO [RS:0;jenkins-hbase4:38139] regionserver.HRegionServer(3303): Received CLOSE for eab518ddf145c5cb7d4c7bb9336d6efc 2023-05-27 22:59:46,286 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d5a304880ee82d316c8dac1e8851e2df, disabling compactions & flushes 2023-05-27 22:59:46,286 INFO [RS:0;jenkins-hbase4:38139] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:46,286 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df. 2023-05-27 22:59:46,286 DEBUG [RS:0;jenkins-hbase4:38139] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x787c076e to 127.0.0.1:54484 2023-05-27 22:59:46,286 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df. 2023-05-27 22:59:46,286 DEBUG [RS:0;jenkins-hbase4:38139] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 22:59:46,286 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df. after waiting 0 ms 2023-05-27 22:59:46,286 INFO [RS:0;jenkins-hbase4:38139] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-27 22:59:46,286 INFO [RS:0;jenkins-hbase4:38139] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-27 22:59:46,286 INFO [RS:0;jenkins-hbase4:38139] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
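Everything from "Shutting down minicluster" onward is the standard HBaseTestingUtility teardown: the region server is told to stop and closes its regions (as the entries above show), then the master, DFS and ZooKeeper follow. A sketch of the usual test-side shape, with TEST_UTIL standing in for whatever utility instance the test class holds.

import org.apache.hadoop.hbase.HBaseTestingUtility;

public class MiniClusterLifecycle {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  public static void main(String[] args) throws Exception {
    TEST_UTIL.startMiniCluster();        // 1 master, 1 region server by default
    try {
      // ... test body ...
    } finally {
      TEST_UTIL.shutdownMiniCluster();   // "Shutting down minicluster" ... "Minicluster is down"
    }
  }
}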
2023-05-27 22:59:46,286 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df. 2023-05-27 22:59:46,286 INFO [RS:0;jenkins-hbase4:38139] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-27 22:59:46,286 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing d5a304880ee82d316c8dac1e8851e2df 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-27 22:59:46,286 INFO [RS:0;jenkins-hbase4:38139] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-27 22:59:46,286 DEBUG [RS:0;jenkins-hbase4:38139] regionserver.HRegionServer(1478): Online Regions={d5a304880ee82d316c8dac1e8851e2df=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df., 1588230740=hbase:meta,,1.1588230740, eab518ddf145c5cb7d4c7bb9336d6efc=hbase:namespace,,1685228325323.eab518ddf145c5cb7d4c7bb9336d6efc.} 2023-05-27 22:59:46,287 DEBUG [RS:0;jenkins-hbase4:38139] regionserver.HRegionServer(1504): Waiting on 1588230740, d5a304880ee82d316c8dac1e8851e2df, eab518ddf145c5cb7d4c7bb9336d6efc 2023-05-27 22:59:46,288 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-27 22:59:46,288 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-27 22:59:46,288 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-27 22:59:46,288 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-27 22:59:46,288 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-27 22:59:46,288 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.10 KB heapSize=5.61 KB 2023-05-27 22:59:46,304 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=22 (bloomFilter=true), to=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/.tmp/info/2ea8fe300a3a4df79725b7f08e93c7b6 2023-05-27 22:59:46,305 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.84 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/meta/1588230740/.tmp/info/357eae595dc74b078a24306abc4da7e3 2023-05-27 22:59:46,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/.tmp/info/2ea8fe300a3a4df79725b7f08e93c7b6 as 
hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/info/2ea8fe300a3a4df79725b7f08e93c7b6 2023-05-27 22:59:46,318 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/info/2ea8fe300a3a4df79725b7f08e93c7b6, entries=1, sequenceid=22, filesize=5.8 K 2023-05-27 22:59:46,319 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for d5a304880ee82d316c8dac1e8851e2df in 33ms, sequenceid=22, compaction requested=true 2023-05-27 22:59:46,326 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=264 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/meta/1588230740/.tmp/table/72fb2d6ada834e83a72d54e4a47cc4ef 2023-05-27 22:59:46,330 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/info/b3c0639ba64444439bdc5ce70ab8302d, hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/info/0fb99053f8d043ff9ac70fc649be26be, hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/info/905b3947e31c45bb970174e597432fb4] to archive 2023-05-27 22:59:46,331 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df.-1] backup.HFileArchiver(360): Archiving compacted files. 
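On close, store files that were already compacted away (the three info/ files listed above) are not deleted in place but moved under the archive/ tree that mirrors the table/region/family layout. A small sketch for inspecting such a directory with the Hadoop FileSystem API; the path argument and class name are hypothetical, and in this run the directory would sit under hdfs://localhost:45583/user/jenkins/test-data/.../archive/.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListArchivedHFiles {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // e.g. <hbase.rootdir>/archive/data/default/<table>/<region>/info
    Path archive = new Path(args[0]);
    FileSystem fs = archive.getFileSystem(conf);
    for (FileStatus f : fs.listStatus(archive)) {
      System.out.println(f.getPath() + " " + f.getLen() + " bytes");
    }
  }
}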
2023-05-27 22:59:46,333 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/info/b3c0639ba64444439bdc5ce70ab8302d to hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/info/b3c0639ba64444439bdc5ce70ab8302d 2023-05-27 22:59:46,335 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/info/0fb99053f8d043ff9ac70fc649be26be to hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/info/0fb99053f8d043ff9ac70fc649be26be 2023-05-27 22:59:46,336 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/meta/1588230740/.tmp/info/357eae595dc74b078a24306abc4da7e3 as hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/meta/1588230740/info/357eae595dc74b078a24306abc4da7e3 2023-05-27 22:59:46,336 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/info/905b3947e31c45bb970174e597432fb4 to hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/info/905b3947e31c45bb970174e597432fb4 2023-05-27 22:59:46,346 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/d5a304880ee82d316c8dac1e8851e2df/recovered.edits/25.seqid, newMaxSeqId=25, maxSeqId=1 2023-05-27 22:59:46,347 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df. 2023-05-27 22:59:46,347 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d5a304880ee82d316c8dac1e8851e2df: 2023-05-27 22:59:46,347 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685228325855.d5a304880ee82d316c8dac1e8851e2df. 
2023-05-27 22:59:46,347 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing eab518ddf145c5cb7d4c7bb9336d6efc, disabling compactions & flushes 2023-05-27 22:59:46,347 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/meta/1588230740/info/357eae595dc74b078a24306abc4da7e3, entries=20, sequenceid=14, filesize=7.6 K 2023-05-27 22:59:46,347 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685228325323.eab518ddf145c5cb7d4c7bb9336d6efc. 2023-05-27 22:59:46,348 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685228325323.eab518ddf145c5cb7d4c7bb9336d6efc. 2023-05-27 22:59:46,348 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685228325323.eab518ddf145c5cb7d4c7bb9336d6efc. after waiting 0 ms 2023-05-27 22:59:46,348 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685228325323.eab518ddf145c5cb7d4c7bb9336d6efc. 2023-05-27 22:59:46,348 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/meta/1588230740/.tmp/table/72fb2d6ada834e83a72d54e4a47cc4ef as hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/meta/1588230740/table/72fb2d6ada834e83a72d54e4a47cc4ef 2023-05-27 22:59:46,352 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/namespace/eab518ddf145c5cb7d4c7bb9336d6efc/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-27 22:59:46,353 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685228325323.eab518ddf145c5cb7d4c7bb9336d6efc. 2023-05-27 22:59:46,353 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for eab518ddf145c5cb7d4c7bb9336d6efc: 2023-05-27 22:59:46,354 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685228325323.eab518ddf145c5cb7d4c7bb9336d6efc. 
2023-05-27 22:59:46,355 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/meta/1588230740/table/72fb2d6ada834e83a72d54e4a47cc4ef, entries=4, sequenceid=14, filesize=4.9 K 2023-05-27 22:59:46,356 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.10 KB/3174, heapSize ~5.33 KB/5456, currentSize=0 B/0 for 1588230740 in 68ms, sequenceid=14, compaction requested=false 2023-05-27 22:59:46,362 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-05-27 22:59:46,362 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-27 22:59:46,362 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-27 22:59:46,362 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-27 22:59:46,362 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-27 22:59:46,488 INFO [RS:0;jenkins-hbase4:38139] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38139,1685228324316; all regions closed. 2023-05-27 22:59:46,488 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/WALs/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:46,494 DEBUG [RS:0;jenkins-hbase4:38139] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/oldWALs 2023-05-27 22:59:46,494 INFO [RS:0;jenkins-hbase4:38139] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C38139%2C1685228324316.meta:.meta(num 1685228325276) 2023-05-27 22:59:46,494 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/WALs/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:46,500 DEBUG [RS:0;jenkins-hbase4:38139] wal.AbstractFSWAL(1028): Moved 2 WAL file(s) to /user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/oldWALs 2023-05-27 22:59:46,500 INFO [RS:0;jenkins-hbase4:38139] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C38139%2C1685228324316:(num 1685228386267) 2023-05-27 22:59:46,500 DEBUG [RS:0;jenkins-hbase4:38139] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 22:59:46,500 INFO [RS:0;jenkins-hbase4:38139] regionserver.LeaseManager(133): Closed leases 2023-05-27 22:59:46,500 INFO [RS:0;jenkins-hbase4:38139] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-05-27 22:59:46,500 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
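At this point both WALs of the region server (the .meta WAL and the default one) have been closed and their remaining files moved to oldWALs. Assertions in this test class are typically built on how many rolled files a WAL is still retaining. A sketch of that check, assuming TEST_UTIL owns the mini cluster; the cast and the getNumRolledLogFiles() counter are assumptions about the 2.x WAL internals, not something taken from this log.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.regionserver.HRegionServer;
import org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL;

final class WalCountCheck {
  // Count the rolled-but-not-yet-archived WAL files of the first region server.
  static int rolledWalFiles(HBaseTestingUtility util) throws Exception {
    HRegionServer rs = util.getMiniHBaseCluster().getRegionServer(0);
    AbstractFSWAL<?> wal = (AbstractFSWAL<?>) rs.getWAL(null); // null selects the default WAL
    return wal.getNumRolledLogFiles();
  }
}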
2023-05-27 22:59:46,501 INFO [RS:0;jenkins-hbase4:38139] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38139 2023-05-27 22:59:46,504 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38139,1685228324316 2023-05-27 22:59:46,504 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 22:59:46,504 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 22:59:46,505 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38139,1685228324316] 2023-05-27 22:59:46,505 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38139,1685228324316; numProcessing=1 2023-05-27 22:59:46,507 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38139,1685228324316 already deleted, retry=false 2023-05-27 22:59:46,507 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38139,1685228324316 expired; onlineServers=0 2023-05-27 22:59:46,507 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,40691,1685228324276' ***** 2023-05-27 22:59:46,507 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-27 22:59:46,507 DEBUG [M:0;jenkins-hbase4:40691] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4ba9507c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-27 22:59:46,507 INFO [M:0;jenkins-hbase4:40691] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40691,1685228324276 2023-05-27 22:59:46,507 INFO [M:0;jenkins-hbase4:40691] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40691,1685228324276; all regions closed. 2023-05-27 22:59:46,507 DEBUG [M:0;jenkins-hbase4:40691] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 22:59:46,507 DEBUG [M:0;jenkins-hbase4:40691] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-27 22:59:46,507 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
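The ephemeral /hbase/rs/... znode disappearing is what tells the master the region server is gone; with "Cluster shutdown set" there is nothing to reassign, so the master begins stopping itself. Outside the test harness the same sequence is triggered through Admin. A sketch of those entry points, assuming an open Admin handle; the host:port in the comment is a placeholder.

import java.io.IOException;
import org.apache.hadoop.hbase.client.Admin;

final class ClusterStop {
  static void stopCluster(Admin admin) throws IOException {
    admin.shutdown();                         // master sets cluster shutdown; RS znodes expire as above
    // admin.stopRegionServer("host:16020");  // alternative: stop a single region server
    // admin.stopMaster();                    // alternative: stop only the active master
  }
}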
2023-05-27 22:59:46,507 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685228324912] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685228324912,5,FailOnTimeoutGroup] 2023-05-27 22:59:46,507 DEBUG [M:0;jenkins-hbase4:40691] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-27 22:59:46,507 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685228324913] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685228324913,5,FailOnTimeoutGroup] 2023-05-27 22:59:46,508 INFO [M:0;jenkins-hbase4:40691] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-27 22:59:46,509 INFO [M:0;jenkins-hbase4:40691] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-27 22:59:46,509 INFO [M:0;jenkins-hbase4:40691] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-05-27 22:59:46,509 DEBUG [M:0;jenkins-hbase4:40691] master.HMaster(1512): Stopping service threads 2023-05-27 22:59:46,509 INFO [M:0;jenkins-hbase4:40691] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-27 22:59:46,509 ERROR [M:0;jenkins-hbase4:40691] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-27 22:59:46,509 INFO [M:0;jenkins-hbase4:40691] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-27 22:59:46,509 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-27 22:59:46,510 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-27 22:59:46,510 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:59:46,510 DEBUG [M:0;jenkins-hbase4:40691] zookeeper.ZKUtil(398): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-27 22:59:46,510 WARN [M:0;jenkins-hbase4:40691] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-27 22:59:46,510 INFO [M:0;jenkins-hbase4:40691] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-27 22:59:46,510 INFO [M:0;jenkins-hbase4:40691] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-27 22:59:46,510 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 22:59:46,511 DEBUG [M:0;jenkins-hbase4:40691] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-27 22:59:46,511 INFO [M:0;jenkins-hbase4:40691] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 22:59:46,511 DEBUG [M:0;jenkins-hbase4:40691] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 22:59:46,511 DEBUG [M:0;jenkins-hbase4:40691] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-27 22:59:46,511 DEBUG [M:0;jenkins-hbase4:40691] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-27 22:59:46,511 INFO [M:0;jenkins-hbase4:40691] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.90 KB heapSize=47.33 KB 2023-05-27 22:59:46,526 INFO [M:0;jenkins-hbase4:40691] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.90 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/53ad296af4cb458a8c9d367c3f3c6a18 2023-05-27 22:59:46,532 INFO [M:0;jenkins-hbase4:40691] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 53ad296af4cb458a8c9d367c3f3c6a18 2023-05-27 22:59:46,533 DEBUG [M:0;jenkins-hbase4:40691] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/53ad296af4cb458a8c9d367c3f3c6a18 as hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/53ad296af4cb458a8c9d367c3f3c6a18 2023-05-27 22:59:46,538 INFO [M:0;jenkins-hbase4:40691] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 53ad296af4cb458a8c9d367c3f3c6a18 2023-05-27 22:59:46,538 INFO [M:0;jenkins-hbase4:40691] regionserver.HStore(1080): Added hdfs://localhost:45583/user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/53ad296af4cb458a8c9d367c3f3c6a18, entries=11, sequenceid=100, filesize=6.1 K 2023-05-27 22:59:46,539 INFO [M:0;jenkins-hbase4:40691] regionserver.HRegion(2948): Finished flush of dataSize ~38.90 KB/39836, heapSize ~47.31 KB/48448, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 28ms, sequenceid=100, compaction requested=false 2023-05-27 22:59:46,540 INFO [M:0;jenkins-hbase4:40691] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 22:59:46,540 DEBUG [M:0;jenkins-hbase4:40691] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 22:59:46,541 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/ae6afb51-27b7-c70a-03fc-41350be1fd98/MasterData/WALs/jenkins-hbase4.apache.org,40691,1685228324276 2023-05-27 22:59:46,544 INFO [M:0;jenkins-hbase4:40691] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-27 22:59:46,544 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-27 22:59:46,545 INFO [M:0;jenkins-hbase4:40691] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40691 2023-05-27 22:59:46,546 DEBUG [M:0;jenkins-hbase4:40691] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,40691,1685228324276 already deleted, retry=false 2023-05-27 22:59:46,606 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 22:59:46,606 INFO [RS:0;jenkins-hbase4:38139] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38139,1685228324316; zookeeper connection closed. 
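Once the master's local store (master:store under MasterData) is flushed and its RPC server stops, only thread joins remain. When a test drives the shutdown itself rather than going through shutdownMiniCluster(), the join is usually expressed as in the sketch below, with TEST_UTIL assumed as before; waitUntilShutDown() returns around the point where JVMClusterUtil reports "Shutdown of 1 master(s) and 1 regionserver(s) complete" a few entries further on.

import org.apache.hadoop.hbase.HBaseTestingUtility;

final class JoinClusterThreads {
  static void stopAndJoin(HBaseTestingUtility util) throws Exception {
    util.getHBaseCluster().shutdown();          // request stop of master and region servers
    util.getHBaseCluster().waitUntilShutDown(); // block until the in-JVM cluster threads exit
  }
}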
2023-05-27 22:59:46,606 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): regionserver:38139-0x1006ede08a70001, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 22:59:46,606 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@320ffb06] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@320ffb06 2023-05-27 22:59:46,606 INFO [Listener at localhost/36811] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-27 22:59:46,706 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 22:59:46,706 INFO [M:0;jenkins-hbase4:40691] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40691,1685228324276; zookeeper connection closed. 2023-05-27 22:59:46,706 DEBUG [Listener at localhost/36811-EventThread] zookeeper.ZKWatcher(600): master:40691-0x1006ede08a70000, quorum=127.0.0.1:54484, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 22:59:46,707 WARN [Listener at localhost/36811] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 22:59:46,710 INFO [Listener at localhost/36811] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 22:59:46,815 WARN [BP-1533089643-172.31.14.131-1685228323738 heartbeating to localhost/127.0.0.1:45583] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-27 22:59:46,815 WARN [BP-1533089643-172.31.14.131-1685228323738 heartbeating to localhost/127.0.0.1:45583] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1533089643-172.31.14.131-1685228323738 (Datanode Uuid 27c5eb3d-0791-41b3-8475-1ffe4c25e24a) service to localhost/127.0.0.1:45583 2023-05-27 22:59:46,815 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25/cluster_c1cf9527-eefb-427a-6d6e-a5bf485b1df4/dfs/data/data3/current/BP-1533089643-172.31.14.131-1685228323738] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 22:59:46,815 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25/cluster_c1cf9527-eefb-427a-6d6e-a5bf485b1df4/dfs/data/data4/current/BP-1533089643-172.31.14.131-1685228323738] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 22:59:46,816 WARN [Listener at localhost/36811] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 22:59:46,819 INFO [Listener at localhost/36811] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 22:59:46,923 WARN [BP-1533089643-172.31.14.131-1685228323738 heartbeating to localhost/127.0.0.1:45583] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-27 22:59:46,923 WARN [BP-1533089643-172.31.14.131-1685228323738 heartbeating to localhost/127.0.0.1:45583] datanode.BPServiceActor(857): Ending block pool service for: Block pool 
BP-1533089643-172.31.14.131-1685228323738 (Datanode Uuid bcd28621-0f6f-4ce9-a51f-4c55bdd17df3) service to localhost/127.0.0.1:45583 2023-05-27 22:59:46,924 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25/cluster_c1cf9527-eefb-427a-6d6e-a5bf485b1df4/dfs/data/data1/current/BP-1533089643-172.31.14.131-1685228323738] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 22:59:46,925 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25/cluster_c1cf9527-eefb-427a-6d6e-a5bf485b1df4/dfs/data/data2/current/BP-1533089643-172.31.14.131-1685228323738] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 22:59:46,937 INFO [Listener at localhost/36811] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 22:59:46,980 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-27 22:59:47,049 INFO [Listener at localhost/36811] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-27 22:59:47,075 INFO [Listener at localhost/36811] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-27 22:59:47,085 INFO [Listener at localhost/36811] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=95 (was 87) - Thread LEAK? -, OpenFileDescriptor=498 (was 460) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=56 (was 34) - SystemLoadAverage LEAK? -, ProcessCount=172 (was 168) - ProcessCount LEAK? 
-, AvailableMemoryMB=3523 (was 3748) 2023-05-27 22:59:47,094 INFO [Listener at localhost/36811] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRolling Thread=96, OpenFileDescriptor=498, MaxFileDescriptor=60000, SystemLoadAverage=56, ProcessCount=172, AvailableMemoryMB=3523 2023-05-27 22:59:47,095 INFO [Listener at localhost/36811] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-27 22:59:47,095 INFO [Listener at localhost/36811] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25/hadoop.log.dir so I do NOT create it in target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9 2023-05-27 22:59:47,095 INFO [Listener at localhost/36811] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/96a6ae90-4522-3e49-b8fc-c426a9ca4d25/hadoop.tmp.dir so I do NOT create it in target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9 2023-05-27 22:59:47,095 INFO [Listener at localhost/36811] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9/cluster_c9b978b4-6cfa-52f0-e61d-c74f8da7f2b8, deleteOnExit=true 2023-05-27 22:59:47,095 INFO [Listener at localhost/36811] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-27 22:59:47,095 INFO [Listener at localhost/36811] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9/test.cache.data in system properties and HBase conf 2023-05-27 22:59:47,095 INFO [Listener at localhost/36811] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9/hadoop.tmp.dir in system properties and HBase conf 2023-05-27 22:59:47,095 INFO [Listener at localhost/36811] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9/hadoop.log.dir in system properties and HBase conf 2023-05-27 22:59:47,095 INFO [Listener at localhost/36811] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-27 22:59:47,096 INFO [Listener at localhost/36811] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-27 22:59:47,096 INFO [Listener at localhost/36811] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-27 22:59:47,096 DEBUG [Listener at localhost/36811] fs.HFileSystem(308): 
The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-27 22:59:47,096 INFO [Listener at localhost/36811] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-27 22:59:47,096 INFO [Listener at localhost/36811] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-27 22:59:47,096 INFO [Listener at localhost/36811] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-27 22:59:47,096 INFO [Listener at localhost/36811] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-27 22:59:47,096 INFO [Listener at localhost/36811] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-27 22:59:47,096 INFO [Listener at localhost/36811] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-27 22:59:47,097 INFO [Listener at localhost/36811] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-27 22:59:47,097 INFO [Listener at localhost/36811] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-27 22:59:47,097 INFO [Listener at localhost/36811] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-27 22:59:47,097 INFO [Listener at localhost/36811] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9/nfs.dump.dir in system properties and HBase conf 2023-05-27 22:59:47,097 INFO [Listener 
at localhost/36811] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9/java.io.tmpdir in system properties and HBase conf 2023-05-27 22:59:47,097 INFO [Listener at localhost/36811] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-27 22:59:47,097 INFO [Listener at localhost/36811] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-27 22:59:47,097 INFO [Listener at localhost/36811] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-27 22:59:47,098 WARN [Listener at localhost/36811] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-27 22:59:47,101 WARN [Listener at localhost/36811] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-27 22:59:47,101 WARN [Listener at localhost/36811] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-27 22:59:47,139 WARN [Listener at localhost/36811] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 22:59:47,141 INFO [Listener at localhost/36811] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 22:59:47,145 INFO [Listener at localhost/36811] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9/java.io.tmpdir/Jetty_localhost_41855_hdfs____.1m1rmj/webapp 2023-05-27 22:59:47,235 INFO [Listener at localhost/36811] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41855 2023-05-27 22:59:47,236 WARN [Listener at localhost/36811] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
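The testLogRolling case then brings up a second mini cluster with the topology reported in the StartMiniClusterOption line above (1 master, 1 region server, 2 datanodes, 1 ZooKeeper server). A sketch of requesting exactly that topology explicitly, assuming the HBaseTestingUtility API this harness uses; the class and variable names are illustrative.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class StartClusterExample {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(1)
        .numDataNodes(2)
        .numZkServers(1)
        .build();
    util.startMiniCluster(option);   // starts DFS, ZooKeeper, master and region server as logged above
    try {
      // ... test body ...
    } finally {
      util.shutdownMiniCluster();
    }
  }
}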
2023-05-27 22:59:47,239 WARN [Listener at localhost/36811] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-27 22:59:47,239 WARN [Listener at localhost/36811] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-27 22:59:47,279 WARN [Listener at localhost/33271] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 22:59:47,291 WARN [Listener at localhost/33271] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 22:59:47,295 WARN [Listener at localhost/33271] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 22:59:47,296 INFO [Listener at localhost/33271] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 22:59:47,302 INFO [Listener at localhost/33271] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9/java.io.tmpdir/Jetty_localhost_42337_datanode____bzzz7k/webapp 2023-05-27 22:59:47,408 INFO [Listener at localhost/33271] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42337 2023-05-27 22:59:47,414 WARN [Listener at localhost/39053] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 22:59:47,428 WARN [Listener at localhost/39053] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 22:59:47,431 WARN [Listener at localhost/39053] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 22:59:47,433 INFO [Listener at localhost/39053] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 22:59:47,437 INFO [Listener at localhost/39053] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9/java.io.tmpdir/Jetty_localhost_44927_datanode____b073xh/webapp 2023-05-27 22:59:47,503 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb4664182b0258d37: Processing first storage report for DS-b245d3df-36ad-4c64-bf41-b98c6ccc406d from datanode 126cb2fd-6373-4225-94c5-7c97f19760dd 2023-05-27 22:59:47,504 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb4664182b0258d37: from storage DS-b245d3df-36ad-4c64-bf41-b98c6ccc406d node DatanodeRegistration(127.0.0.1:44837, datanodeUuid=126cb2fd-6373-4225-94c5-7c97f19760dd, infoPort=33915, infoSecurePort=0, ipcPort=39053, storageInfo=lv=-57;cid=testClusterID;nsid=297581482;c=1685228387104), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-27 22:59:47,504 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb4664182b0258d37: Processing first storage report for DS-c5529a98-36fc-44fc-81ab-5bc0820c1c4f from datanode 126cb2fd-6373-4225-94c5-7c97f19760dd 2023-05-27 22:59:47,504 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* 
processReport 0xb4664182b0258d37: from storage DS-c5529a98-36fc-44fc-81ab-5bc0820c1c4f node DatanodeRegistration(127.0.0.1:44837, datanodeUuid=126cb2fd-6373-4225-94c5-7c97f19760dd, infoPort=33915, infoSecurePort=0, ipcPort=39053, storageInfo=lv=-57;cid=testClusterID;nsid=297581482;c=1685228387104), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 22:59:47,532 INFO [Listener at localhost/39053] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44927 2023-05-27 22:59:47,538 WARN [Listener at localhost/34663] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 22:59:47,622 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x35e26680499bd57e: Processing first storage report for DS-6c04fc1d-591a-4746-931f-11b32c4d6b59 from datanode 34496a20-6b34-46f1-b265-34bac389eda9 2023-05-27 22:59:47,622 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x35e26680499bd57e: from storage DS-6c04fc1d-591a-4746-931f-11b32c4d6b59 node DatanodeRegistration(127.0.0.1:36449, datanodeUuid=34496a20-6b34-46f1-b265-34bac389eda9, infoPort=42861, infoSecurePort=0, ipcPort=34663, storageInfo=lv=-57;cid=testClusterID;nsid=297581482;c=1685228387104), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 22:59:47,622 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x35e26680499bd57e: Processing first storage report for DS-9836e72a-3e3a-4f1b-b8e4-d79785e19172 from datanode 34496a20-6b34-46f1-b265-34bac389eda9 2023-05-27 22:59:47,622 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x35e26680499bd57e: from storage DS-9836e72a-3e3a-4f1b-b8e4-d79785e19172 node DatanodeRegistration(127.0.0.1:36449, datanodeUuid=34496a20-6b34-46f1-b265-34bac389eda9, infoPort=42861, infoSecurePort=0, ipcPort=34663, storageInfo=lv=-57;cid=testClusterID;nsid=297581482;c=1685228387104), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 22:59:47,644 DEBUG [Listener at localhost/34663] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9 2023-05-27 22:59:47,646 INFO [Listener at localhost/34663] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9/cluster_c9b978b4-6cfa-52f0-e61d-c74f8da7f2b8/zookeeper_0, clientPort=54987, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9/cluster_c9b978b4-6cfa-52f0-e61d-c74f8da7f2b8/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9/cluster_c9b978b4-6cfa-52f0-e61d-c74f8da7f2b8/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-27 22:59:47,647 INFO [Listener at localhost/34663] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=54987 2023-05-27 22:59:47,647 INFO [Listener at localhost/34663] 
fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:59:47,648 INFO [Listener at localhost/34663] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:59:47,662 INFO [Listener at localhost/34663] util.FSUtils(471): Created version file at hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147 with version=8 2023-05-27 22:59:47,662 INFO [Listener at localhost/34663] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/hbase-staging 2023-05-27 22:59:47,663 INFO [Listener at localhost/34663] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-27 22:59:47,664 INFO [Listener at localhost/34663] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 22:59:47,664 INFO [Listener at localhost/34663] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-27 22:59:47,664 INFO [Listener at localhost/34663] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-27 22:59:47,664 INFO [Listener at localhost/34663] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 22:59:47,664 INFO [Listener at localhost/34663] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-27 22:59:47,664 INFO [Listener at localhost/34663] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-27 22:59:47,665 INFO [Listener at localhost/34663] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44157 2023-05-27 22:59:47,665 INFO [Listener at localhost/34663] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:59:47,666 INFO [Listener at localhost/34663] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:59:47,667 INFO [Listener at localhost/34663] zookeeper.RecoverableZooKeeper(93): Process identifier=master:44157 connecting to ZooKeeper ensemble=127.0.0.1:54987 2023-05-27 22:59:47,673 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): master:441570x0, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-27 22:59:47,674 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:44157-0x1006edf00400000 connected 2023-05-27 22:59:47,688 DEBUG [Listener at localhost/34663] 
zookeeper.ZKUtil(164): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 22:59:47,689 DEBUG [Listener at localhost/34663] zookeeper.ZKUtil(164): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 22:59:47,689 DEBUG [Listener at localhost/34663] zookeeper.ZKUtil(164): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-27 22:59:47,689 DEBUG [Listener at localhost/34663] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44157 2023-05-27 22:59:47,690 DEBUG [Listener at localhost/34663] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44157 2023-05-27 22:59:47,690 DEBUG [Listener at localhost/34663] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44157 2023-05-27 22:59:47,690 DEBUG [Listener at localhost/34663] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44157 2023-05-27 22:59:47,690 DEBUG [Listener at localhost/34663] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44157 2023-05-27 22:59:47,691 INFO [Listener at localhost/34663] master.HMaster(444): hbase.rootdir=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147, hbase.cluster.distributed=false 2023-05-27 22:59:47,703 INFO [Listener at localhost/34663] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-27 22:59:47,703 INFO [Listener at localhost/34663] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 22:59:47,703 INFO [Listener at localhost/34663] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-27 22:59:47,703 INFO [Listener at localhost/34663] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-27 22:59:47,704 INFO [Listener at localhost/34663] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 22:59:47,704 INFO [Listener at localhost/34663] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-27 22:59:47,704 INFO [Listener at localhost/34663] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-27 22:59:47,705 INFO [Listener at localhost/34663] ipc.NettyRpcServer(120): Bind to /172.31.14.131:32987 2023-05-27 22:59:47,705 INFO [Listener at localhost/34663] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-27 22:59:47,706 DEBUG [Listener at localhost/34663] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-27 
22:59:47,706 INFO [Listener at localhost/34663] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:59:47,707 INFO [Listener at localhost/34663] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:59:47,708 INFO [Listener at localhost/34663] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:32987 connecting to ZooKeeper ensemble=127.0.0.1:54987 2023-05-27 22:59:47,711 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): regionserver:329870x0, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-27 22:59:47,712 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:32987-0x1006edf00400001 connected 2023-05-27 22:59:47,712 DEBUG [Listener at localhost/34663] zookeeper.ZKUtil(164): regionserver:32987-0x1006edf00400001, quorum=127.0.0.1:54987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 22:59:47,713 DEBUG [Listener at localhost/34663] zookeeper.ZKUtil(164): regionserver:32987-0x1006edf00400001, quorum=127.0.0.1:54987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 22:59:47,713 DEBUG [Listener at localhost/34663] zookeeper.ZKUtil(164): regionserver:32987-0x1006edf00400001, quorum=127.0.0.1:54987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-27 22:59:47,717 DEBUG [Listener at localhost/34663] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=32987 2023-05-27 22:59:47,717 DEBUG [Listener at localhost/34663] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=32987 2023-05-27 22:59:47,717 DEBUG [Listener at localhost/34663] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=32987 2023-05-27 22:59:47,718 DEBUG [Listener at localhost/34663] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=32987 2023-05-27 22:59:47,718 DEBUG [Listener at localhost/34663] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=32987 2023-05-27 22:59:47,719 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,44157,1685228387663 2023-05-27 22:59:47,720 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-27 22:59:47,721 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,44157,1685228387663 2023-05-27 22:59:47,722 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-27 22:59:47,722 DEBUG [Listener at 
localhost/34663-EventThread] zookeeper.ZKWatcher(600): regionserver:32987-0x1006edf00400001, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-27 22:59:47,722 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:59:47,723 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-27 22:59:47,723 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-27 22:59:47,723 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,44157,1685228387663 from backup master directory 2023-05-27 22:59:47,725 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,44157,1685228387663 2023-05-27 22:59:47,725 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-27 22:59:47,726 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
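The entries above show the master's ZKWatcher setting watchers on znodes that do not yet exist (/hbase/master, /hbase/running, /hbase/acl) against the mini ZooKeeper ensemble at 127.0.0.1:54987. As a minimal sketch of that pattern with the plain ZooKeeper client (not HBase's ZKWatcher/ZKUtil; the session timeout and class name below are made up for illustration): exists() on an absent path returns null but still registers the watch, so a later NodeCreated event is delivered.

import org.apache.zookeeper.ZooKeeper;

public class WatchMissingZNode {
    public static void main(String[] args) throws Exception {
        // Quorum taken from the log above (127.0.0.1:54987); the 30 s session timeout is an assumption.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:54987", 30_000,
                event -> System.out.println("event: " + event.getType() + " on " + event.getPath()));
        // Returns null because /hbase/master does not exist yet, but the watch is registered,
        // so a NodeCreated event like the one seen later in this log would reach the watcher.
        System.out.println("stat = " + zk.exists("/hbase/master", true));
        zk.close();
    }
}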
2023-05-27 22:59:47,726 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,44157,1685228387663 2023-05-27 22:59:47,739 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/hbase.id with ID: 783102d1-b31d-401f-ab60-99e266ee92d5 2023-05-27 22:59:47,748 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:59:47,750 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:59:47,761 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x794ce8fe to 127.0.0.1:54987 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 22:59:47,764 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1528cb7f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 22:59:47,764 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-27 22:59:47,765 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-27 22:59:47,765 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 22:59:47,766 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/MasterData/data/master/store-tmp 2023-05-27 22:59:47,772 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:59:47,772 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-27 22:59:47,772 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 22:59:47,772 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 22:59:47,772 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-27 22:59:47,772 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 22:59:47,772 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 22:59:47,773 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 22:59:47,773 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/MasterData/WALs/jenkins-hbase4.apache.org,44157,1685228387663 2023-05-27 22:59:47,775 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44157%2C1685228387663, suffix=, logDir=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/MasterData/WALs/jenkins-hbase4.apache.org,44157,1685228387663, archiveDir=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/MasterData/oldWALs, maxLogs=10 2023-05-27 22:59:47,780 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/MasterData/WALs/jenkins-hbase4.apache.org,44157,1685228387663/jenkins-hbase4.apache.org%2C44157%2C1685228387663.1685228387775 2023-05-27 22:59:47,780 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44837,DS-b245d3df-36ad-4c64-bf41-b98c6ccc406d,DISK], DatanodeInfoWithStorage[127.0.0.1:36449,DS-6c04fc1d-591a-4746-931f-11b32c4d6b59,DISK]] 2023-05-27 22:59:47,780 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-27 22:59:47,781 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:59:47,781 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-27 22:59:47,781 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-27 22:59:47,782 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 
1595e783b53d99cd5eef43b6debb2682 2023-05-27 22:59:47,783 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-27 22:59:47,783 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-27 22:59:47,784 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:59:47,785 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-27 22:59:47,785 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-27 22:59:47,787 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-27 22:59:47,790 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 22:59:47,790 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=706972, jitterRate=-0.10103951394557953}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 22:59:47,790 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 22:59:47,790 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-27 22:59:47,791 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-27 22:59:47,791 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 
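The "Opened 1595e783..." entry above prints desiredMaxFileSize=706972 together with jitterRate=-0.10103951394557953. A quick, hedged reconstruction of that number: assuming the configured region max file size in this test run is 768 KB (786432 bytes, inferred from the log rather than stated in it), the printed threshold works out to the base size plus base*jitterRate.

public class SplitSizeJitter {
    public static void main(String[] args) {
        long base = 768L * 1024;                           // assumed configured max file size (inferred)
        double jitterRate = -0.10103951394557953;          // jitterRate printed in the log entry above
        long desired = base + (long) (base * jitterRate);  // jittered split threshold
        System.out.println(desired);                       // 706972, matching desiredMaxFileSize above
    }
}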
2023-05-27 22:59:47,791 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-27 22:59:47,792 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-27 22:59:47,792 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-27 22:59:47,792 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-27 22:59:47,793 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-27 22:59:47,793 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-27 22:59:47,804 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-27 22:59:47,804 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
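The StochasticLoadBalancer entry above lists the cost functions it loaded and notes the "sum of multiplier of cost functions". Illustrative only (not the balancer's actual code): the overall score is a weighted combination of per-function costs, roughly as in the sketch below with made-up weights and costs.

public class WeightedCost {
    // Combine per-function costs (each expected in [0, 1]) using their multipliers as weights.
    static double combine(double[] multipliers, double[] costs) {
        double weighted = 0, total = 0;
        for (int i = 0; i < costs.length; i++) {
            weighted += multipliers[i] * costs[i];
            total += multipliers[i];
        }
        return total == 0 ? 0 : weighted / total;  // normalized overall cost
    }

    public static void main(String[] args) {
        // Example numbers only; they are not taken from this log.
        System.out.println(combine(new double[] {500, 5, 7}, new double[] {0.2, 0.9, 0.1}));
    }
}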
2023-05-27 22:59:47,805 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-27 22:59:47,805 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-27 22:59:47,805 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-27 22:59:47,807 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:59:47,807 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-27 22:59:47,808 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-27 22:59:47,808 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-27 22:59:47,811 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-27 22:59:47,811 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): regionserver:32987-0x1006edf00400001, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-27 22:59:47,811 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:59:47,811 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,44157,1685228387663, sessionid=0x1006edf00400000, setting cluster-up flag (Was=false) 2023-05-27 22:59:47,814 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:59:47,819 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-27 22:59:47,820 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,44157,1685228387663 2023-05-27 22:59:47,822 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 
22:59:47,826 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-27 22:59:47,827 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,44157,1685228387663 2023-05-27 22:59:47,827 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/.hbase-snapshot/.tmp 2023-05-27 22:59:47,829 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-27 22:59:47,829 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 22:59:47,830 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 22:59:47,830 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 22:59:47,830 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 22:59:47,830 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-05-27 22:59:47,830 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:59:47,830 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-27 22:59:47,830 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:59:47,831 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685228417831 2023-05-27 22:59:47,831 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-27 22:59:47,831 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-27 22:59:47,831 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-27 22:59:47,831 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-27 22:59:47,831 
INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-27 22:59:47,831 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-27 22:59:47,831 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-27 22:59:47,832 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-27 22:59:47,832 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-27 22:59:47,832 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-27 22:59:47,832 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-27 22:59:47,832 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-27 22:59:47,832 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-27 22:59:47,832 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-27 22:59:47,832 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685228387832,5,FailOnTimeoutGroup] 2023-05-27 22:59:47,833 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685228387833,5,FailOnTimeoutGroup] 2023-05-27 22:59:47,833 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-27 22:59:47,833 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-27 22:59:47,833 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-27 22:59:47,833 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
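The "Chore ScheduledChore name=..., period=..., unit=MILLISECONDS is enabled" entries above describe tasks re-run on a fixed period. A stand-in sketch with a plain ScheduledExecutorService (not HBase's ChoreService API), using the LogsCleaner period from the log:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ChoreSketch {
    public static void main(String[] args) {
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(1);
        // 600000 ms period, as for the LogsCleaner chore above; keeps running until shutdown() is called.
        pool.scheduleAtFixedRate(() -> System.out.println("clean old WALs (placeholder task)"),
                0, 600_000, TimeUnit.MILLISECONDS);
    }
}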
2023-05-27 22:59:47,833 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-27 22:59:47,841 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-27 22:59:47,842 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-27 22:59:47,842 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147 2023-05-27 22:59:47,849 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:59:47,850 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-27 22:59:47,851 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/meta/1588230740/info 2023-05-27 22:59:47,851 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-27 22:59:47,852 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:59:47,852 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-27 22:59:47,853 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/meta/1588230740/rep_barrier 2023-05-27 22:59:47,853 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-27 22:59:47,854 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:59:47,854 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-27 22:59:47,855 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/meta/1588230740/table 2023-05-27 22:59:47,855 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-27 22:59:47,856 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:59:47,856 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/meta/1588230740 2023-05-27 22:59:47,857 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/meta/1588230740 2023-05-27 22:59:47,859 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-27 22:59:47,860 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-27 22:59:47,862 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 22:59:47,862 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=862360, jitterRate=0.09654861688613892}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-27 22:59:47,862 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-27 22:59:47,862 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-27 22:59:47,862 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-27 22:59:47,862 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-27 22:59:47,862 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-27 22:59:47,862 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-27 22:59:47,863 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-27 22:59:47,863 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-27 22:59:47,864 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-27 22:59:47,864 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-27 22:59:47,864 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-27 22:59:47,865 INFO 
[PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-27 22:59:47,867 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-27 22:59:47,920 INFO [RS:0;jenkins-hbase4:32987] regionserver.HRegionServer(951): ClusterId : 783102d1-b31d-401f-ab60-99e266ee92d5 2023-05-27 22:59:47,920 DEBUG [RS:0;jenkins-hbase4:32987] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-27 22:59:47,922 DEBUG [RS:0;jenkins-hbase4:32987] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-27 22:59:47,923 DEBUG [RS:0;jenkins-hbase4:32987] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-27 22:59:47,926 DEBUG [RS:0;jenkins-hbase4:32987] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-27 22:59:47,927 DEBUG [RS:0;jenkins-hbase4:32987] zookeeper.ReadOnlyZKClient(139): Connect 0x08644503 to 127.0.0.1:54987 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 22:59:47,930 DEBUG [RS:0;jenkins-hbase4:32987] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6a3aa7f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 22:59:47,930 DEBUG [RS:0;jenkins-hbase4:32987] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@29c2e73, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-27 22:59:47,938 DEBUG [RS:0;jenkins-hbase4:32987] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:32987 2023-05-27 22:59:47,938 INFO [RS:0;jenkins-hbase4:32987] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-27 22:59:47,939 INFO [RS:0;jenkins-hbase4:32987] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-27 22:59:47,939 DEBUG [RS:0;jenkins-hbase4:32987] regionserver.HRegionServer(1022): About to register with Master. 
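The ReadOnlyZKClient connect entries above advertise "retries 30, retry interval 1000ms". The sketch below is not that class's implementation, only what such a bounded retry policy amounts to:

public class BoundedRetry {
    interface Attempt { void run() throws Exception; }

    static void runWithRetries(Attempt a, int maxRetries, long intervalMs) throws Exception {
        for (int i = 0; ; i++) {
            try {
                a.run();
                return;
            } catch (Exception e) {
                if (i >= maxRetries) throw e;  // give up after the configured number of retries
                Thread.sleep(intervalMs);      // wait the configured interval before retrying
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // 30 retries at 1000 ms, the values printed in the connect line above.
        runWithRetries(() -> System.out.println("connected"), 30, 1000);
    }
}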
2023-05-27 22:59:47,939 INFO [RS:0;jenkins-hbase4:32987] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,44157,1685228387663 with isa=jenkins-hbase4.apache.org/172.31.14.131:32987, startcode=1685228387703 2023-05-27 22:59:47,939 DEBUG [RS:0;jenkins-hbase4:32987] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-27 22:59:47,942 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54515, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-05-27 22:59:47,943 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44157] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,32987,1685228387703 2023-05-27 22:59:47,943 DEBUG [RS:0;jenkins-hbase4:32987] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147 2023-05-27 22:59:47,943 DEBUG [RS:0;jenkins-hbase4:32987] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33271 2023-05-27 22:59:47,943 DEBUG [RS:0;jenkins-hbase4:32987] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-27 22:59:47,945 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 22:59:47,945 DEBUG [RS:0;jenkins-hbase4:32987] zookeeper.ZKUtil(162): regionserver:32987-0x1006edf00400001, quorum=127.0.0.1:54987, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32987,1685228387703 2023-05-27 22:59:47,945 WARN [RS:0;jenkins-hbase4:32987] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
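The registration entries above show the region server's entry appearing under /hbase/rs (jenkins-hbase4.apache.org,32987,1685228387703) and the master's RegionServerTracker reacting to the new ephemeral node. A hypothetical sketch with the plain ZooKeeper client, not HBase's ZKUtil; it assumes the /hbase/rs parent already exists and ignores HBase's ACL setup:

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class EphemeralRegistration {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("127.0.0.1:54987", 30_000, event -> { });
        // Server name format mirrors the log: host,port,startcode.
        String path = "/hbase/rs/jenkins-hbase4.apache.org,32987,1685228387703";
        zk.create(path, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
        // The znode is removed automatically when this session closes or expires,
        // which is what lets the master notice a crashed region server.
        zk.close();
    }
}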
2023-05-27 22:59:47,945 INFO [RS:0;jenkins-hbase4:32987] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 22:59:47,945 DEBUG [RS:0;jenkins-hbase4:32987] regionserver.HRegionServer(1946): logDir=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/WALs/jenkins-hbase4.apache.org,32987,1685228387703 2023-05-27 22:59:47,946 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,32987,1685228387703] 2023-05-27 22:59:47,950 DEBUG [RS:0;jenkins-hbase4:32987] zookeeper.ZKUtil(162): regionserver:32987-0x1006edf00400001, quorum=127.0.0.1:54987, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32987,1685228387703 2023-05-27 22:59:47,951 DEBUG [RS:0;jenkins-hbase4:32987] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-27 22:59:47,951 INFO [RS:0;jenkins-hbase4:32987] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-27 22:59:47,952 INFO [RS:0;jenkins-hbase4:32987] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-27 22:59:47,952 INFO [RS:0;jenkins-hbase4:32987] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-27 22:59:47,953 INFO [RS:0;jenkins-hbase4:32987] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-27 22:59:47,954 INFO [RS:0;jenkins-hbase4:32987] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-27 22:59:47,955 INFO [RS:0;jenkins-hbase4:32987] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
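The MemStoreFlusher entry above reports globalMemStoreLimit=782.4 M and globalMemStoreLimitLowMark=743.3 M. A back-of-the-envelope check, assuming the default heap fraction of 0.4 and a 0.95 lower-mark ratio (both are assumptions about this test's configuration, not values stated in the log):

public class MemStoreLimits {
    public static void main(String[] args) {
        double maxHeapMb = 782.4 / 0.4;           // ~1956 MB heap implied by the log line above
        double globalLimitMb = maxHeapMb * 0.4;   // 782.4 MB, matches globalMemStoreLimit
        double lowMarkMb = globalLimitMb * 0.95;  // ~743.3 MB, matches globalMemStoreLimitLowMark
        System.out.printf("heap=%.1f limit=%.1f lowMark=%.1f%n", maxHeapMb, globalLimitMb, lowMarkMb);
    }
}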
2023-05-27 22:59:47,955 DEBUG [RS:0;jenkins-hbase4:32987] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:59:47,955 DEBUG [RS:0;jenkins-hbase4:32987] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:59:47,955 DEBUG [RS:0;jenkins-hbase4:32987] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:59:47,956 DEBUG [RS:0;jenkins-hbase4:32987] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:59:47,956 DEBUG [RS:0;jenkins-hbase4:32987] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:59:47,956 DEBUG [RS:0;jenkins-hbase4:32987] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-27 22:59:47,956 DEBUG [RS:0;jenkins-hbase4:32987] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:59:47,956 DEBUG [RS:0;jenkins-hbase4:32987] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:59:47,956 DEBUG [RS:0;jenkins-hbase4:32987] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:59:47,956 DEBUG [RS:0;jenkins-hbase4:32987] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 22:59:47,957 INFO [RS:0;jenkins-hbase4:32987] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-27 22:59:47,957 INFO [RS:0;jenkins-hbase4:32987] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-27 22:59:47,957 INFO [RS:0;jenkins-hbase4:32987] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-27 22:59:47,968 INFO [RS:0;jenkins-hbase4:32987] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-27 22:59:47,968 INFO [RS:0;jenkins-hbase4:32987] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,32987,1685228387703-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
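The "Starting executor service name=..., corePoolSize=N, maxPoolSize=N" entries above are bounded thread pools keyed by event type. A plain ThreadPoolExecutor stand-in (not HBase's ExecutorService wrapper), sized like RS_OPEN_REGION above; the keep-alive value is an assumption:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class NamedPool {
    public static void main(String[] args) {
        // corePoolSize=1, maxPoolSize=1 as for RS_OPEN_REGION above; 60 s keep-alive is assumed.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        pool.submit(() -> System.out.println("open region task (placeholder)"));
        pool.shutdown();
    }
}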
2023-05-27 22:59:47,978 INFO [RS:0;jenkins-hbase4:32987] regionserver.Replication(203): jenkins-hbase4.apache.org,32987,1685228387703 started 2023-05-27 22:59:47,978 INFO [RS:0;jenkins-hbase4:32987] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,32987,1685228387703, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:32987, sessionid=0x1006edf00400001 2023-05-27 22:59:47,978 DEBUG [RS:0;jenkins-hbase4:32987] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-27 22:59:47,978 DEBUG [RS:0;jenkins-hbase4:32987] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,32987,1685228387703 2023-05-27 22:59:47,978 DEBUG [RS:0;jenkins-hbase4:32987] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,32987,1685228387703' 2023-05-27 22:59:47,978 DEBUG [RS:0;jenkins-hbase4:32987] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 22:59:47,978 DEBUG [RS:0;jenkins-hbase4:32987] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 22:59:47,978 DEBUG [RS:0;jenkins-hbase4:32987] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-27 22:59:47,978 DEBUG [RS:0;jenkins-hbase4:32987] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-27 22:59:47,979 DEBUG [RS:0;jenkins-hbase4:32987] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,32987,1685228387703 2023-05-27 22:59:47,979 DEBUG [RS:0;jenkins-hbase4:32987] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,32987,1685228387703' 2023-05-27 22:59:47,979 DEBUG [RS:0;jenkins-hbase4:32987] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-27 22:59:47,979 DEBUG [RS:0;jenkins-hbase4:32987] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-27 22:59:47,979 DEBUG [RS:0;jenkins-hbase4:32987] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-27 22:59:47,979 INFO [RS:0;jenkins-hbase4:32987] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-27 22:59:47,979 INFO [RS:0;jenkins-hbase4:32987] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
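The ZKProcedureMemberRpcs entries above check '/hbase/flush-table-proc/abort' and then watch '/hbase/flush-table-proc/acquired' for new procedures. A plain ZooKeeper sketch of that last step (not the ZKProcedureMemberRpcs code); it assumes the znode exists, as the earlier "Clearing all znodes" entries indicate:

import java.util.List;
import org.apache.zookeeper.ZooKeeper;

public class WatchAcquiredProcedures {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("127.0.0.1:54987", 30_000, event -> { });
        // Passing true registers a child watch, so newly acquired procedures raise NodeChildrenChanged.
        List<String> acquired = zk.getChildren("/hbase/flush-table-proc/acquired", true);
        System.out.println("pending procedures: " + acquired);
        zk.close();
    }
}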
2023-05-27 22:59:48,017 DEBUG [jenkins-hbase4:44157] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-27 22:59:48,018 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,32987,1685228387703, state=OPENING 2023-05-27 22:59:48,019 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-27 22:59:48,021 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:59:48,021 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,32987,1685228387703}] 2023-05-27 22:59:48,021 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-27 22:59:48,081 INFO [RS:0;jenkins-hbase4:32987] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C32987%2C1685228387703, suffix=, logDir=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/WALs/jenkins-hbase4.apache.org,32987,1685228387703, archiveDir=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/oldWALs, maxLogs=32 2023-05-27 22:59:48,091 INFO [RS:0;jenkins-hbase4:32987] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/WALs/jenkins-hbase4.apache.org,32987,1685228387703/jenkins-hbase4.apache.org%2C32987%2C1685228387703.1685228388081 2023-05-27 22:59:48,091 DEBUG [RS:0;jenkins-hbase4:32987] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44837,DS-b245d3df-36ad-4c64-bf41-b98c6ccc406d,DISK], DatanodeInfoWithStorage[127.0.0.1:36449,DS-6c04fc1d-591a-4746-931f-11b32c4d6b59,DISK]] 2023-05-27 22:59:48,174 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,32987,1685228387703 2023-05-27 22:59:48,175 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-27 22:59:48,178 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36230, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-27 22:59:48,181 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-27 22:59:48,181 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 22:59:48,183 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C32987%2C1685228387703.meta, suffix=.meta, logDir=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/WALs/jenkins-hbase4.apache.org,32987,1685228387703, archiveDir=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/oldWALs, maxLogs=32 2023-05-27 22:59:48,190 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/WALs/jenkins-hbase4.apache.org,32987,1685228387703/jenkins-hbase4.apache.org%2C32987%2C1685228387703.meta.1685228388183.meta 2023-05-27 22:59:48,190 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44837,DS-b245d3df-36ad-4c64-bf41-b98c6ccc406d,DISK], DatanodeInfoWithStorage[127.0.0.1:36449,DS-6c04fc1d-591a-4746-931f-11b32c4d6b59,DISK]] 2023-05-27 22:59:48,191 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-27 22:59:48,191 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-27 22:59:48,191 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-27 22:59:48,191 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-27 22:59:48,191 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-27 22:59:48,191 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:59:48,191 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-27 22:59:48,191 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-27 22:59:48,192 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-27 22:59:48,193 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/meta/1588230740/info 2023-05-27 22:59:48,193 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/meta/1588230740/info 2023-05-27 22:59:48,194 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-27 22:59:48,194 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:59:48,194 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-27 22:59:48,195 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/meta/1588230740/rep_barrier 2023-05-27 22:59:48,195 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/meta/1588230740/rep_barrier 2023-05-27 22:59:48,195 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-27 22:59:48,196 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:59:48,196 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-27 22:59:48,197 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/meta/1588230740/table 2023-05-27 22:59:48,197 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/meta/1588230740/table 2023-05-27 22:59:48,197 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-27 22:59:48,198 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:59:48,198 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/meta/1588230740 2023-05-27 22:59:48,199 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/meta/1588230740 2023-05-27 22:59:48,201 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-27 22:59:48,203 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-27 22:59:48,204 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=814877, jitterRate=0.03617081046104431}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-27 22:59:48,204 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-27 22:59:48,206 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685228388174 2023-05-27 22:59:48,209 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-27 22:59:48,210 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-27 22:59:48,210 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,32987,1685228387703, state=OPEN 2023-05-27 22:59:48,212 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-27 22:59:48,212 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-27 22:59:48,215 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-27 22:59:48,215 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,32987,1685228387703 in 191 msec 2023-05-27 22:59:48,217 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-27 22:59:48,217 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 351 msec 2023-05-27 22:59:48,220 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 390 msec 2023-05-27 22:59:48,220 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685228388220, completionTime=-1 2023-05-27 22:59:48,220 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-27 22:59:48,220 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-27 22:59:48,223 DEBUG [hconnection-0x1c018a3d-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-27 22:59:48,225 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36236, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-27 22:59:48,227 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-27 22:59:48,227 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685228448227 2023-05-27 22:59:48,227 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685228508227 2023-05-27 22:59:48,227 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-05-27 22:59:48,233 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44157,1685228387663-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-27 22:59:48,233 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44157,1685228387663-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 22:59:48,233 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44157,1685228387663-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 22:59:48,233 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:44157, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 22:59:48,233 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-27 22:59:48,233 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-05-27 22:59:48,233 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-27 22:59:48,234 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-27 22:59:48,234 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-27 22:59:48,236 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-27 22:59:48,236 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-27 22:59:48,238 DEBUG [HFileArchiver-9] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/.tmp/data/hbase/namespace/e3a303c7ef932c4f2db8ca76b3c5e69f 2023-05-27 22:59:48,239 DEBUG [HFileArchiver-9] backup.HFileArchiver(153): Directory hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/.tmp/data/hbase/namespace/e3a303c7ef932c4f2db8ca76b3c5e69f empty. 2023-05-27 22:59:48,239 DEBUG [HFileArchiver-9] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/.tmp/data/hbase/namespace/e3a303c7ef932c4f2db8ca76b3c5e69f 2023-05-27 22:59:48,239 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-27 22:59:48,249 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-27 22:59:48,251 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => e3a303c7ef932c4f2db8ca76b3c5e69f, NAME => 'hbase:namespace,,1685228388233.e3a303c7ef932c4f2db8ca76b3c5e69f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/.tmp 2023-05-27 22:59:48,257 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685228388233.e3a303c7ef932c4f2db8ca76b3c5e69f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:59:48,257 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing e3a303c7ef932c4f2db8ca76b3c5e69f, disabling compactions & flushes 2023-05-27 22:59:48,257 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685228388233.e3a303c7ef932c4f2db8ca76b3c5e69f. 
2023-05-27 22:59:48,257 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685228388233.e3a303c7ef932c4f2db8ca76b3c5e69f. 2023-05-27 22:59:48,257 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685228388233.e3a303c7ef932c4f2db8ca76b3c5e69f. after waiting 0 ms 2023-05-27 22:59:48,258 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685228388233.e3a303c7ef932c4f2db8ca76b3c5e69f. 2023-05-27 22:59:48,258 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685228388233.e3a303c7ef932c4f2db8ca76b3c5e69f. 2023-05-27 22:59:48,258 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for e3a303c7ef932c4f2db8ca76b3c5e69f: 2023-05-27 22:59:48,260 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-27 22:59:48,261 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685228388233.e3a303c7ef932c4f2db8ca76b3c5e69f.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685228388260"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685228388260"}]},"ts":"1685228388260"} 2023-05-27 22:59:48,263 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-27 22:59:48,263 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-27 22:59:48,264 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685228388264"}]},"ts":"1685228388264"} 2023-05-27 22:59:48,265 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-27 22:59:48,271 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e3a303c7ef932c4f2db8ca76b3c5e69f, ASSIGN}] 2023-05-27 22:59:48,272 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e3a303c7ef932c4f2db8ca76b3c5e69f, ASSIGN 2023-05-27 22:59:48,273 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=e3a303c7ef932c4f2db8ca76b3c5e69f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,32987,1685228387703; forceNewPlan=false, retain=false 2023-05-27 22:59:48,424 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=e3a303c7ef932c4f2db8ca76b3c5e69f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,32987,1685228387703 2023-05-27 22:59:48,425 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685228388233.e3a303c7ef932c4f2db8ca76b3c5e69f.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685228388424"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685228388424"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685228388424"}]},"ts":"1685228388424"} 2023-05-27 22:59:48,427 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure e3a303c7ef932c4f2db8ca76b3c5e69f, server=jenkins-hbase4.apache.org,32987,1685228387703}] 2023-05-27 22:59:48,583 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685228388233.e3a303c7ef932c4f2db8ca76b3c5e69f. 2023-05-27 22:59:48,583 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e3a303c7ef932c4f2db8ca76b3c5e69f, NAME => 'hbase:namespace,,1685228388233.e3a303c7ef932c4f2db8ca76b3c5e69f.', STARTKEY => '', ENDKEY => ''} 2023-05-27 22:59:48,584 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace e3a303c7ef932c4f2db8ca76b3c5e69f 2023-05-27 22:59:48,584 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685228388233.e3a303c7ef932c4f2db8ca76b3c5e69f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:59:48,584 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e3a303c7ef932c4f2db8ca76b3c5e69f 2023-05-27 22:59:48,584 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e3a303c7ef932c4f2db8ca76b3c5e69f 2023-05-27 22:59:48,588 INFO [StoreOpener-e3a303c7ef932c4f2db8ca76b3c5e69f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region e3a303c7ef932c4f2db8ca76b3c5e69f 2023-05-27 22:59:48,590 DEBUG [StoreOpener-e3a303c7ef932c4f2db8ca76b3c5e69f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/namespace/e3a303c7ef932c4f2db8ca76b3c5e69f/info 2023-05-27 22:59:48,590 DEBUG [StoreOpener-e3a303c7ef932c4f2db8ca76b3c5e69f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/namespace/e3a303c7ef932c4f2db8ca76b3c5e69f/info 2023-05-27 22:59:48,590 INFO [StoreOpener-e3a303c7ef932c4f2db8ca76b3c5e69f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e3a303c7ef932c4f2db8ca76b3c5e69f columnFamilyName info 2023-05-27 22:59:48,591 INFO [StoreOpener-e3a303c7ef932c4f2db8ca76b3c5e69f-1] regionserver.HStore(310): Store=e3a303c7ef932c4f2db8ca76b3c5e69f/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:59:48,592 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/namespace/e3a303c7ef932c4f2db8ca76b3c5e69f 2023-05-27 22:59:48,592 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/namespace/e3a303c7ef932c4f2db8ca76b3c5e69f 2023-05-27 22:59:48,595 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e3a303c7ef932c4f2db8ca76b3c5e69f 2023-05-27 22:59:48,598 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/namespace/e3a303c7ef932c4f2db8ca76b3c5e69f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 22:59:48,598 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e3a303c7ef932c4f2db8ca76b3c5e69f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=704110, jitterRate=-0.1046781837940216}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 22:59:48,598 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e3a303c7ef932c4f2db8ca76b3c5e69f: 2023-05-27 22:59:48,600 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685228388233.e3a303c7ef932c4f2db8ca76b3c5e69f., pid=6, masterSystemTime=1685228388579 2023-05-27 22:59:48,602 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685228388233.e3a303c7ef932c4f2db8ca76b3c5e69f. 2023-05-27 22:59:48,602 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685228388233.e3a303c7ef932c4f2db8ca76b3c5e69f. 
2023-05-27 22:59:48,603 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=e3a303c7ef932c4f2db8ca76b3c5e69f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,32987,1685228387703 2023-05-27 22:59:48,603 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685228388233.e3a303c7ef932c4f2db8ca76b3c5e69f.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685228388603"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685228388603"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685228388603"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685228388603"}]},"ts":"1685228388603"} 2023-05-27 22:59:48,606 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-27 22:59:48,606 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure e3a303c7ef932c4f2db8ca76b3c5e69f, server=jenkins-hbase4.apache.org,32987,1685228387703 in 177 msec 2023-05-27 22:59:48,608 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-27 22:59:48,608 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=e3a303c7ef932c4f2db8ca76b3c5e69f, ASSIGN in 335 msec 2023-05-27 22:59:48,609 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-27 22:59:48,609 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685228388609"}]},"ts":"1685228388609"} 2023-05-27 22:59:48,610 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-27 22:59:48,613 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-27 22:59:48,614 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 380 msec 2023-05-27 22:59:48,635 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-27 22:59:48,637 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-27 22:59:48,637 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:59:48,641 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-27 22:59:48,649 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): 
master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-27 22:59:48,652 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 11 msec 2023-05-27 22:59:48,663 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-27 22:59:48,670 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-27 22:59:48,674 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 11 msec 2023-05-27 22:59:48,687 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-27 22:59:48,690 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-27 22:59:48,690 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.964sec 2023-05-27 22:59:48,690 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-27 22:59:48,690 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-27 22:59:48,690 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-27 22:59:48,691 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44157,1685228387663-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-27 22:59:48,691 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44157,1685228387663-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-05-27 22:59:48,692 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-27 22:59:48,721 DEBUG [Listener at localhost/34663] zookeeper.ReadOnlyZKClient(139): Connect 0x5877aab8 to 127.0.0.1:54987 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 22:59:48,724 DEBUG [Listener at localhost/34663] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@15d5a7cb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 22:59:48,726 DEBUG [hconnection-0x2003387-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-27 22:59:48,727 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36240, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-27 22:59:48,729 INFO [Listener at localhost/34663] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,44157,1685228387663 2023-05-27 22:59:48,729 INFO [Listener at localhost/34663] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 22:59:48,732 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-27 22:59:48,732 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 22:59:48,732 INFO [Listener at localhost/34663] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-27 22:59:48,734 DEBUG [Listener at localhost/34663] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-27 22:59:48,736 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51180, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-27 22:59:48,738 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44157] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-27 22:59:48,738 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44157] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-05-27 22:59:48,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44157] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-27 22:59:48,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44157] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRolling 2023-05-27 22:59:48,742 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-05-27 22:59:48,743 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44157] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRolling" procId is: 9 2023-05-27 22:59:48,743 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-27 22:59:48,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44157] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-27 22:59:48,746 DEBUG [HFileArchiver-10] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/.tmp/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72 2023-05-27 22:59:48,747 DEBUG [HFileArchiver-10] backup.HFileArchiver(153): Directory hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/.tmp/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72 empty. 
2023-05-27 22:59:48,747 DEBUG [HFileArchiver-10] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/.tmp/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72 2023-05-27 22:59:48,747 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRolling regions 2023-05-27 22:59:48,756 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/.tmp/data/default/TestLogRolling-testLogRolling/.tabledesc/.tableinfo.0000000001 2023-05-27 22:59:48,757 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5284852e3c6fe0fc659026b96f907d72, NAME => 'TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/.tmp 2023-05-27 22:59:48,765 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:59:48,766 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1604): Closing 5284852e3c6fe0fc659026b96f907d72, disabling compactions & flushes 2023-05-27 22:59:48,766 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72. 2023-05-27 22:59:48,766 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72. 2023-05-27 22:59:48,766 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72. after waiting 0 ms 2023-05-27 22:59:48,766 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72. 2023-05-27 22:59:48,766 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72. 
2023-05-27 22:59:48,766 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for 5284852e3c6fe0fc659026b96f907d72: 2023-05-27 22:59:48,768 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ADD_TO_META 2023-05-27 22:59:48,769 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685228388768"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685228388768"}]},"ts":"1685228388768"} 2023-05-27 22:59:48,770 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-27 22:59:48,771 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-27 22:59:48,771 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685228388771"}]},"ts":"1685228388771"} 2023-05-27 22:59:48,772 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLING in hbase:meta 2023-05-27 22:59:48,776 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=5284852e3c6fe0fc659026b96f907d72, ASSIGN}] 2023-05-27 22:59:48,778 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=5284852e3c6fe0fc659026b96f907d72, ASSIGN 2023-05-27 22:59:48,778 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=5284852e3c6fe0fc659026b96f907d72, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,32987,1685228387703; forceNewPlan=false, retain=false 2023-05-27 22:59:48,929 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=5284852e3c6fe0fc659026b96f907d72, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,32987,1685228387703 2023-05-27 22:59:48,929 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685228388929"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685228388929"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685228388929"}]},"ts":"1685228388929"} 2023-05-27 22:59:48,931 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 5284852e3c6fe0fc659026b96f907d72, server=jenkins-hbase4.apache.org,32987,1685228387703}] 2023-05-27 22:59:49,087 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open 
TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72. 2023-05-27 22:59:49,087 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5284852e3c6fe0fc659026b96f907d72, NAME => 'TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72.', STARTKEY => '', ENDKEY => ''} 2023-05-27 22:59:49,087 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling 5284852e3c6fe0fc659026b96f907d72 2023-05-27 22:59:49,087 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 22:59:49,087 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5284852e3c6fe0fc659026b96f907d72 2023-05-27 22:59:49,087 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5284852e3c6fe0fc659026b96f907d72 2023-05-27 22:59:49,088 INFO [StoreOpener-5284852e3c6fe0fc659026b96f907d72-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 5284852e3c6fe0fc659026b96f907d72 2023-05-27 22:59:49,090 DEBUG [StoreOpener-5284852e3c6fe0fc659026b96f907d72-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info 2023-05-27 22:59:49,090 DEBUG [StoreOpener-5284852e3c6fe0fc659026b96f907d72-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info 2023-05-27 22:59:49,090 INFO [StoreOpener-5284852e3c6fe0fc659026b96f907d72-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5284852e3c6fe0fc659026b96f907d72 columnFamilyName info 2023-05-27 22:59:49,090 INFO [StoreOpener-5284852e3c6fe0fc659026b96f907d72-1] regionserver.HStore(310): Store=5284852e3c6fe0fc659026b96f907d72/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 22:59:49,091 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72 2023-05-27 22:59:49,091 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72 2023-05-27 22:59:49,094 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5284852e3c6fe0fc659026b96f907d72 2023-05-27 22:59:49,095 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 22:59:49,096 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5284852e3c6fe0fc659026b96f907d72; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=777917, jitterRate=-0.010828465223312378}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 22:59:49,096 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5284852e3c6fe0fc659026b96f907d72: 2023-05-27 22:59:49,097 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72., pid=11, masterSystemTime=1685228389083 2023-05-27 22:59:49,098 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72. 2023-05-27 22:59:49,098 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72. 
2023-05-27 22:59:49,099 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=5284852e3c6fe0fc659026b96f907d72, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,32987,1685228387703 2023-05-27 22:59:49,099 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685228389099"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685228389099"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685228389099"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685228389099"}]},"ts":"1685228389099"} 2023-05-27 22:59:49,103 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-27 22:59:49,103 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 5284852e3c6fe0fc659026b96f907d72, server=jenkins-hbase4.apache.org,32987,1685228387703 in 170 msec 2023-05-27 22:59:49,105 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-27 22:59:49,105 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=5284852e3c6fe0fc659026b96f907d72, ASSIGN in 327 msec 2023-05-27 22:59:49,106 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-27 22:59:49,106 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685228389106"}]},"ts":"1685228389106"} 2023-05-27 22:59:49,107 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLED in hbase:meta 2023-05-27 22:59:49,109 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-05-27 22:59:49,110 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRolling in 371 msec 2023-05-27 22:59:51,896 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-27 22:59:53,951 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-27 22:59:53,951 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-27 22:59:53,952 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRolling' 2023-05-27 22:59:58,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44157] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-27 22:59:58,745 INFO [Listener at localhost/34663] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRolling, 
procId: 9 completed 2023-05-27 22:59:58,747 DEBUG [Listener at localhost/34663] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRolling 2023-05-27 22:59:58,747 DEBUG [Listener at localhost/34663] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72. 2023-05-27 22:59:58,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32987] regionserver.HRegion(9158): Flush requested on 5284852e3c6fe0fc659026b96f907d72 2023-05-27 22:59:58,759 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 5284852e3c6fe0fc659026b96f907d72 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-27 22:59:58,769 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/.tmp/info/0f314fb39a034c2baf460ab656a587f8 2023-05-27 22:59:58,777 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/.tmp/info/0f314fb39a034c2baf460ab656a587f8 as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/0f314fb39a034c2baf460ab656a587f8 2023-05-27 22:59:58,783 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/0f314fb39a034c2baf460ab656a587f8, entries=7, sequenceid=11, filesize=12.1 K 2023-05-27 22:59:58,784 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=19.96 KB/20444 for 5284852e3c6fe0fc659026b96f907d72 in 25ms, sequenceid=11, compaction requested=false 2023-05-27 22:59:58,784 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 5284852e3c6fe0fc659026b96f907d72: 2023-05-27 22:59:58,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32987] regionserver.HRegion(9158): Flush requested on 5284852e3c6fe0fc659026b96f907d72 2023-05-27 22:59:58,785 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 5284852e3c6fe0fc659026b96f907d72 1/1 column families, dataSize=21.02 KB heapSize=22.75 KB 2023-05-27 22:59:58,794 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.02 KB at sequenceid=34 (bloomFilter=true), to=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/.tmp/info/7014de67410848729052da96ec609a1d 2023-05-27 22:59:58,800 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/.tmp/info/7014de67410848729052da96ec609a1d as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/7014de67410848729052da96ec609a1d 2023-05-27 22:59:58,804 INFO [MemStoreFlusher.0] 
regionserver.HStore(1080): Added hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/7014de67410848729052da96ec609a1d, entries=20, sequenceid=34, filesize=25.8 K 2023-05-27 22:59:58,805 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.02 KB/21520, heapSize ~22.73 KB/23280, currentSize=5.25 KB/5380 for 5284852e3c6fe0fc659026b96f907d72 in 20ms, sequenceid=34, compaction requested=false 2023-05-27 22:59:58,805 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 5284852e3c6fe0fc659026b96f907d72: 2023-05-27 22:59:58,805 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=37.9 K, sizeToCheck=16.0 K 2023-05-27 22:59:58,805 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-27 22:59:58,805 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/7014de67410848729052da96ec609a1d because midkey is the same as first or last row 2023-05-27 23:00:00,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32987] regionserver.HRegion(9158): Flush requested on 5284852e3c6fe0fc659026b96f907d72 2023-05-27 23:00:00,795 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 5284852e3c6fe0fc659026b96f907d72 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-27 23:00:00,807 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=44 (bloomFilter=true), to=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/.tmp/info/f1e8e93244cc4680adf712443eca11c0 2023-05-27 23:00:00,812 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/.tmp/info/f1e8e93244cc4680adf712443eca11c0 as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/f1e8e93244cc4680adf712443eca11c0 2023-05-27 23:00:00,818 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/f1e8e93244cc4680adf712443eca11c0, entries=7, sequenceid=44, filesize=12.1 K 2023-05-27 23:00:00,819 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=19.96 KB/20444 for 5284852e3c6fe0fc659026b96f907d72 in 24ms, sequenceid=44, compaction requested=true 2023-05-27 23:00:00,819 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 5284852e3c6fe0fc659026b96f907d72: 2023-05-27 23:00:00,819 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=50.0 K, sizeToCheck=16.0 K 2023-05-27 23:00:00,819 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-27 23:00:00,819 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32987] regionserver.HRegion(9158): Flush requested on 5284852e3c6fe0fc659026b96f907d72 2023-05-27 23:00:00,819 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/7014de67410848729052da96ec609a1d because midkey is the same as first or last row 2023-05-27 23:00:00,819 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 23:00:00,819 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-27 23:00:00,820 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 5284852e3c6fe0fc659026b96f907d72 1/1 column families, dataSize=21.02 KB heapSize=22.75 KB 2023-05-27 23:00:00,821 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 51218 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-27 23:00:00,822 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HStore(1912): 5284852e3c6fe0fc659026b96f907d72/info is initiating minor compaction (all files) 2023-05-27 23:00:00,822 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 5284852e3c6fe0fc659026b96f907d72/info in TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72. 2023-05-27 23:00:00,822 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/0f314fb39a034c2baf460ab656a587f8, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/7014de67410848729052da96ec609a1d, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/f1e8e93244cc4680adf712443eca11c0] into tmpdir=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/.tmp, totalSize=50.0 K 2023-05-27 23:00:00,822 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.Compactor(207): Compacting 0f314fb39a034c2baf460ab656a587f8, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, earliestPutTs=1685228398750 2023-05-27 23:00:00,823 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.Compactor(207): Compacting 7014de67410848729052da96ec609a1d, keycount=20, bloomtype=ROW, size=25.8 K, encoding=NONE, compression=NONE, seqNum=34, earliestPutTs=1685228398759 2023-05-27 23:00:00,824 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.Compactor(207): Compacting f1e8e93244cc4680adf712443eca11c0, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=44, earliestPutTs=1685228398786 2023-05-27 23:00:00,833 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed 
memstore data size=21.02 KB at sequenceid=67 (bloomFilter=true), to=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/.tmp/info/06108b1c5366418e85dfd668ca7b6f60 2023-05-27 23:00:00,835 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32987] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=5284852e3c6fe0fc659026b96f907d72, server=jenkins-hbase4.apache.org,32987,1685228387703 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-27 23:00:00,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32987] ipc.CallRunner(144): callId: 72 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:36240 deadline: 1685228410835, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=5284852e3c6fe0fc659026b96f907d72, server=jenkins-hbase4.apache.org,32987,1685228387703 2023-05-27 23:00:00,839 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] throttle.PressureAwareThroughputController(145): 5284852e3c6fe0fc659026b96f907d72#info#compaction#29 average throughput is 17.44 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-27 23:00:00,840 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/.tmp/info/06108b1c5366418e85dfd668ca7b6f60 as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/06108b1c5366418e85dfd668ca7b6f60 2023-05-27 23:00:00,853 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/06108b1c5366418e85dfd668ca7b6f60, entries=20, sequenceid=67, filesize=25.8 K 2023-05-27 23:00:00,854 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.02 KB/21520, heapSize ~22.73 KB/23280, currentSize=9.46 KB/9684 for 5284852e3c6fe0fc659026b96f907d72 in 35ms, sequenceid=67, compaction requested=false 2023-05-27 23:00:00,854 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 5284852e3c6fe0fc659026b96f907d72: 2023-05-27 23:00:00,854 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=75.8 K, sizeToCheck=16.0 K 2023-05-27 23:00:00,854 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-27 23:00:00,854 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/7014de67410848729052da96ec609a1d because midkey is the same as first or last row 2023-05-27 23:00:00,857 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/.tmp/info/6835d9074eae4773ba1c566b73e40219 as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/6835d9074eae4773ba1c566b73e40219 2023-05-27 23:00:00,863 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 5284852e3c6fe0fc659026b96f907d72/info of 5284852e3c6fe0fc659026b96f907d72 into 6835d9074eae4773ba1c566b73e40219(size=40.7 K), total size for store is 66.5 K. This selection was in queue for 0sec, and took 0sec to execute. 
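The repeated "Should split because info size=..., sizeToCheck=16.0 K" followed by "cannot split ... because midkey is the same as first or last row" pairs come from a two-stage check. A hedged sketch of that check, using only numbers visible in the log (initialSize=16384, regionsWithCommonTable=1, and the jittered cap assumed earlier); the method names are illustrative, not HBase APIs:

    import java.util.Arrays;

    public class SplitCheckSketch {
        // Stepped threshold: min(desiredMaxFileSize, initialSize * regions^3); with one region
        // of this table that is 16384 bytes, i.e. the logged "sizeToCheck=16.0 K".
        static long sizeToCheck(long initialSize, int regionsWithCommonTable, long desiredMaxFileSize) {
            long stepped = initialSize * (long) Math.pow(regionsWithCommonTable, 3);
            return Math.min(desiredMaxFileSize, stepped);
        }

        // The split point is refused while the largest file's midkey equals its first or last row key.
        static boolean acceptSplitPoint(byte[] midkey, byte[] firstRowKey, byte[] lastRowKey) {
            return !(Arrays.equals(midkey, firstRowKey) || Arrays.equals(midkey, lastRowKey));
        }

        public static void main(String[] args) {
            System.out.println(sizeToCheck(16_384L, 1, 777_917L));   // 16384
            byte[] midkey = "row0000".getBytes();
            System.out.println(acceptSplitPoint(midkey, "row0000".getBytes(), "row0063".getBytes())); // false
        }
    }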
2023-05-27 23:00:00,863 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 5284852e3c6fe0fc659026b96f907d72: 2023-05-27 23:00:00,863 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72., storeName=5284852e3c6fe0fc659026b96f907d72/info, priority=13, startTime=1685228400819; duration=0sec 2023-05-27 23:00:00,863 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=66.5 K, sizeToCheck=16.0 K 2023-05-27 23:00:00,863 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-27 23:00:00,863 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/6835d9074eae4773ba1c566b73e40219 because midkey is the same as first or last row 2023-05-27 23:00:00,863 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 23:00:10,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32987] regionserver.HRegion(9158): Flush requested on 5284852e3c6fe0fc659026b96f907d72 2023-05-27 23:00:10,934 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 5284852e3c6fe0fc659026b96f907d72 1/1 column families, dataSize=10.51 KB heapSize=11.50 KB 2023-05-27 23:00:10,946 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.51 KB at sequenceid=81 (bloomFilter=true), to=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/.tmp/info/aa041e44b297476787110367a73c230d 2023-05-27 23:00:10,952 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/.tmp/info/aa041e44b297476787110367a73c230d as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/aa041e44b297476787110367a73c230d 2023-05-27 23:00:10,956 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/aa041e44b297476787110367a73c230d, entries=10, sequenceid=81, filesize=15.3 K 2023-05-27 23:00:10,957 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~10.51 KB/10760, heapSize ~11.48 KB/11760, currentSize=0 B/0 for 5284852e3c6fe0fc659026b96f907d72 in 23ms, sequenceid=81, compaction requested=true 2023-05-27 23:00:10,958 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 5284852e3c6fe0fc659026b96f907d72: 2023-05-27 23:00:10,958 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=81.7 K, sizeToCheck=16.0 K 2023-05-27 23:00:10,958 DEBUG [MemStoreFlusher.0] 
regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-27 23:00:10,958 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/6835d9074eae4773ba1c566b73e40219 because midkey is the same as first or last row 2023-05-27 23:00:10,958 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 23:00:10,958 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-27 23:00:10,959 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 83687 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-27 23:00:10,959 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HStore(1912): 5284852e3c6fe0fc659026b96f907d72/info is initiating minor compaction (all files) 2023-05-27 23:00:10,959 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 5284852e3c6fe0fc659026b96f907d72/info in TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72. 2023-05-27 23:00:10,959 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/6835d9074eae4773ba1c566b73e40219, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/06108b1c5366418e85dfd668ca7b6f60, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/aa041e44b297476787110367a73c230d] into tmpdir=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/.tmp, totalSize=81.7 K 2023-05-27 23:00:10,959 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.Compactor(207): Compacting 6835d9074eae4773ba1c566b73e40219, keycount=34, bloomtype=ROW, size=40.7 K, encoding=NONE, compression=NONE, seqNum=44, earliestPutTs=1685228398750 2023-05-27 23:00:10,960 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.Compactor(207): Compacting 06108b1c5366418e85dfd668ca7b6f60, keycount=20, bloomtype=ROW, size=25.8 K, encoding=NONE, compression=NONE, seqNum=67, earliestPutTs=1685228400795 2023-05-27 23:00:10,960 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.Compactor(207): Compacting aa041e44b297476787110367a73c230d, keycount=10, bloomtype=ROW, size=15.3 K, encoding=NONE, compression=NONE, seqNum=81, earliestPutTs=1685228400820 2023-05-27 23:00:10,970 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] throttle.PressureAwareThroughputController(145): 5284852e3c6fe0fc659026b96f907d72#info#compaction#31 average throughput is 32.84 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-27 23:00:10,984 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/.tmp/info/86bf1c5f6ddc4f9f9b7b32bd5ee30adc as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/86bf1c5f6ddc4f9f9b7b32bd5ee30adc 2023-05-27 23:00:10,989 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 5284852e3c6fe0fc659026b96f907d72/info of 5284852e3c6fe0fc659026b96f907d72 into 86bf1c5f6ddc4f9f9b7b32bd5ee30adc(size=72.5 K), total size for store is 72.5 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-27 23:00:10,989 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 5284852e3c6fe0fc659026b96f907d72: 2023-05-27 23:00:10,989 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72., storeName=5284852e3c6fe0fc659026b96f907d72/info, priority=13, startTime=1685228410958; duration=0sec 2023-05-27 23:00:10,989 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=72.5 K, sizeToCheck=16.0 K 2023-05-27 23:00:10,989 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-27 23:00:10,990 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.CompactSplit(227): Splitting TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72., compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 23:00:10,990 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 23:00:10,991 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44157] assignment.AssignmentManager(1140): Split request from jenkins-hbase4.apache.org,32987,1685228387703, parent={ENCODED => 5284852e3c6fe0fc659026b96f907d72, NAME => 'TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72.', STARTKEY => '', ENDKEY => ''} splitKey=row0062 2023-05-27 23:00:10,998 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44157] assignment.SplitTableRegionProcedure(219): Splittable=true state=OPEN, location=jenkins-hbase4.apache.org,32987,1685228387703 2023-05-27 23:00:11,004 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44157] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=5284852e3c6fe0fc659026b96f907d72, daughterA=6f0e3e36d0fd48fe2fb462bffb5dcb9a, daughterB=6d400feb19af72560059bfd56c267738 2023-05-27 23:00:11,005 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=5284852e3c6fe0fc659026b96f907d72, 
daughterA=6f0e3e36d0fd48fe2fb462bffb5dcb9a, daughterB=6d400feb19af72560059bfd56c267738 2023-05-27 23:00:11,005 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=5284852e3c6fe0fc659026b96f907d72, daughterA=6f0e3e36d0fd48fe2fb462bffb5dcb9a, daughterB=6d400feb19af72560059bfd56c267738 2023-05-27 23:00:11,005 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=5284852e3c6fe0fc659026b96f907d72, daughterA=6f0e3e36d0fd48fe2fb462bffb5dcb9a, daughterB=6d400feb19af72560059bfd56c267738 2023-05-27 23:00:11,013 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=5284852e3c6fe0fc659026b96f907d72, UNASSIGN}] 2023-05-27 23:00:11,015 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=5284852e3c6fe0fc659026b96f907d72, UNASSIGN 2023-05-27 23:00:11,015 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=5284852e3c6fe0fc659026b96f907d72, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,32987,1685228387703 2023-05-27 23:00:11,016 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685228411015"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685228411015"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685228411015"}]},"ts":"1685228411015"} 2023-05-27 23:00:11,017 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; CloseRegionProcedure 5284852e3c6fe0fc659026b96f907d72, server=jenkins-hbase4.apache.org,32987,1685228387703}] 2023-05-27 23:00:11,175 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5284852e3c6fe0fc659026b96f907d72 2023-05-27 23:00:11,175 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5284852e3c6fe0fc659026b96f907d72, disabling compactions & flushes 2023-05-27 23:00:11,175 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72. 2023-05-27 23:00:11,175 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72. 2023-05-27 23:00:11,175 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72. after waiting 0 ms 2023-05-27 23:00:11,175 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72. 
2023-05-27 23:00:11,181 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/0f314fb39a034c2baf460ab656a587f8, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/7014de67410848729052da96ec609a1d, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/6835d9074eae4773ba1c566b73e40219, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/f1e8e93244cc4680adf712443eca11c0, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/06108b1c5366418e85dfd668ca7b6f60, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/aa041e44b297476787110367a73c230d] to archive 2023-05-27 23:00:11,182 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72.-1] backup.HFileArchiver(360): Archiving compacted files. 2023-05-27 23:00:11,184 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/0f314fb39a034c2baf460ab656a587f8 to hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/archive/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/0f314fb39a034c2baf460ab656a587f8 2023-05-27 23:00:11,185 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/7014de67410848729052da96ec609a1d to hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/archive/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/7014de67410848729052da96ec609a1d 2023-05-27 23:00:11,186 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/6835d9074eae4773ba1c566b73e40219 to hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/archive/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/6835d9074eae4773ba1c566b73e40219 2023-05-27 23:00:11,187 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, 
hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/f1e8e93244cc4680adf712443eca11c0 to hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/archive/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/f1e8e93244cc4680adf712443eca11c0 2023-05-27 23:00:11,189 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/06108b1c5366418e85dfd668ca7b6f60 to hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/archive/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/06108b1c5366418e85dfd668ca7b6f60 2023-05-27 23:00:11,190 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/aa041e44b297476787110367a73c230d to hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/archive/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/aa041e44b297476787110367a73c230d 2023-05-27 23:00:11,195 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/recovered.edits/85.seqid, newMaxSeqId=85, maxSeqId=1 2023-05-27 23:00:11,195 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72. 
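Each "Archived from FileableStoreFile" move above follows the same shape: the store file keeps its relative layout under the cluster root, with "archive/" inserted directly beneath it. A small sketch of that mapping, with path shapes taken from the log and a helper name that is illustrative, not an HBase API:

    public class ArchivePathSketch {
        static String archiveLocation(String rootDir, String storeFilePath) {
            String relative = storeFilePath.substring(rootDir.length());   // "/data/default/..."
            return rootDir + "/archive" + relative;                        // "/archive/data/default/..."
        }

        public static void main(String[] args) {
            String root = "hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147";
            String storeFile = root + "/data/default/TestLogRolling-testLogRolling/"
                + "5284852e3c6fe0fc659026b96f907d72/info/0f314fb39a034c2baf460ab656a587f8";
            System.out.println(archiveLocation(root, storeFile));   // matches the first archive target above
        }
    }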
2023-05-27 23:00:11,195 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5284852e3c6fe0fc659026b96f907d72: 2023-05-27 23:00:11,197 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5284852e3c6fe0fc659026b96f907d72 2023-05-27 23:00:11,198 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=5284852e3c6fe0fc659026b96f907d72, regionState=CLOSED 2023-05-27 23:00:11,198 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685228411198"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685228411198"}]},"ts":"1685228411198"} 2023-05-27 23:00:11,201 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-05-27 23:00:11,201 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; CloseRegionProcedure 5284852e3c6fe0fc659026b96f907d72, server=jenkins-hbase4.apache.org,32987,1685228387703 in 182 msec 2023-05-27 23:00:11,203 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-05-27 23:00:11,203 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=5284852e3c6fe0fc659026b96f907d72, UNASSIGN in 188 msec 2023-05-27 23:00:11,213 INFO [PEWorker-4] assignment.SplitTableRegionProcedure(694): pid=12 splitting 1 storefiles, region=5284852e3c6fe0fc659026b96f907d72, threads=1 2023-05-27 23:00:11,214 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/86bf1c5f6ddc4f9f9b7b32bd5ee30adc for region: 5284852e3c6fe0fc659026b96f907d72 2023-05-27 23:00:11,246 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/86bf1c5f6ddc4f9f9b7b32bd5ee30adc for region: 5284852e3c6fe0fc659026b96f907d72 2023-05-27 23:00:11,246 DEBUG [PEWorker-4] assignment.SplitTableRegionProcedure(755): pid=12 split storefiles for region 5284852e3c6fe0fc659026b96f907d72 Daughter A: 1 storefiles, Daughter B: 1 storefiles. 
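The "splitting started/complete for store file" step above does not copy any data: when the daughters open later in this log, each loads a file named 86bf1c5f6ddc4f9f9b7b32bd5ee30adc.5284852e3c6fe0fc659026b96f907d72 that resolves back to the parent's HFile and is read as its top or bottom half around the row0062 split key. A hedged sketch of that naming convention (the helper name is illustrative):

    public class DaughterReferenceSketch {
        // "<parentHFileName>.<parentEncodedRegionName>", resolved against the parent's file and
        // exposing only the half of its key range that belongs to the daughter.
        static String daughterReferenceName(String parentHFileName, String parentEncodedRegion) {
            return parentHFileName + "." + parentEncodedRegion;
        }

        public static void main(String[] args) {
            System.out.println(daughterReferenceName(
                "86bf1c5f6ddc4f9f9b7b32bd5ee30adc", "5284852e3c6fe0fc659026b96f907d72"));
            // prints the reference name loaded by both daughters further down in this log
        }
    }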
2023-05-27 23:00:11,276 DEBUG [PEWorker-4] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6f0e3e36d0fd48fe2fb462bffb5dcb9a/recovered.edits/85.seqid, newMaxSeqId=85, maxSeqId=-1 2023-05-27 23:00:11,277 DEBUG [PEWorker-4] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/recovered.edits/85.seqid, newMaxSeqId=85, maxSeqId=-1 2023-05-27 23:00:11,280 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685228411279"},{"qualifier":"splitA","vlen":70,"tag":[],"timestamp":"1685228411279"},{"qualifier":"splitB","vlen":70,"tag":[],"timestamp":"1685228411279"}]},"ts":"1685228411279"} 2023-05-27 23:00:11,280 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685228410998.6f0e3e36d0fd48fe2fb462bffb5dcb9a.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685228411279"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685228411279"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685228411279"}]},"ts":"1685228411279"} 2023-05-27 23:00:11,280 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685228411279"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685228411279"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685228411279"}]},"ts":"1685228411279"} 2023-05-27 23:00:11,322 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=32987] regionserver.HRegion(9158): Flush requested on 1588230740 2023-05-27 23:00:11,322 DEBUG [MemStoreFlusher.0] regionserver.FlushAllLargeStoresPolicy(69): Since none of the CFs were above the size, flushing all. 
2023-05-27 23:00:11,322 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.82 KB heapSize=8.36 KB 2023-05-27 23:00:11,331 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=6f0e3e36d0fd48fe2fb462bffb5dcb9a, ASSIGN}, {pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=6d400feb19af72560059bfd56c267738, ASSIGN}] 2023-05-27 23:00:11,332 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=6f0e3e36d0fd48fe2fb462bffb5dcb9a, ASSIGN 2023-05-27 23:00:11,332 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=6d400feb19af72560059bfd56c267738, ASSIGN 2023-05-27 23:00:11,333 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=6f0e3e36d0fd48fe2fb462bffb5dcb9a, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase4.apache.org,32987,1685228387703; forceNewPlan=false, retain=false 2023-05-27 23:00:11,333 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=6d400feb19af72560059bfd56c267738, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase4.apache.org,32987,1685228387703; forceNewPlan=false, retain=false 2023-05-27 23:00:11,334 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.61 KB at sequenceid=17 (bloomFilter=false), to=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/meta/1588230740/.tmp/info/d454e733afb84ddc844cfacf2168911d 2023-05-27 23:00:11,346 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=216 B at sequenceid=17 (bloomFilter=false), to=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/meta/1588230740/.tmp/table/118ef74be38a473dba8cd80a10374872 2023-05-27 23:00:11,352 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/meta/1588230740/.tmp/info/d454e733afb84ddc844cfacf2168911d as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/meta/1588230740/info/d454e733afb84ddc844cfacf2168911d 2023-05-27 23:00:11,356 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/meta/1588230740/info/d454e733afb84ddc844cfacf2168911d, entries=29, sequenceid=17, filesize=8.6 K 2023-05-27 23:00:11,357 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/meta/1588230740/.tmp/table/118ef74be38a473dba8cd80a10374872 as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/meta/1588230740/table/118ef74be38a473dba8cd80a10374872 2023-05-27 23:00:11,362 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/meta/1588230740/table/118ef74be38a473dba8cd80a10374872, entries=4, sequenceid=17, filesize=4.8 K 2023-05-27 23:00:11,363 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~4.82 KB/4934, heapSize ~8.08 KB/8272, currentSize=0 B/0 for 1588230740 in 41ms, sequenceid=17, compaction requested=false 2023-05-27 23:00:11,363 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-05-27 23:00:11,484 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=6f0e3e36d0fd48fe2fb462bffb5dcb9a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,32987,1685228387703 2023-05-27 23:00:11,484 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=6d400feb19af72560059bfd56c267738, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,32987,1685228387703 2023-05-27 23:00:11,485 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685228411484"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685228411484"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685228411484"}]},"ts":"1685228411484"} 2023-05-27 23:00:11,485 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685228410998.6f0e3e36d0fd48fe2fb462bffb5dcb9a.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685228411484"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685228411484"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685228411484"}]},"ts":"1685228411484"} 2023-05-27 23:00:11,486 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE; OpenRegionProcedure 6f0e3e36d0fd48fe2fb462bffb5dcb9a, server=jenkins-hbase4.apache.org,32987,1685228387703}] 2023-05-27 23:00:11,487 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=16, state=RUNNABLE; OpenRegionProcedure 6d400feb19af72560059bfd56c267738, server=jenkins-hbase4.apache.org,32987,1685228387703}] 2023-05-27 23:00:11,641 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738. 
2023-05-27 23:00:11,641 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6d400feb19af72560059bfd56c267738, NAME => 'TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738.', STARTKEY => 'row0062', ENDKEY => ''} 2023-05-27 23:00:11,642 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling 6d400feb19af72560059bfd56c267738 2023-05-27 23:00:11,642 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 23:00:11,642 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6d400feb19af72560059bfd56c267738 2023-05-27 23:00:11,642 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6d400feb19af72560059bfd56c267738 2023-05-27 23:00:11,643 INFO [StoreOpener-6d400feb19af72560059bfd56c267738-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 6d400feb19af72560059bfd56c267738 2023-05-27 23:00:11,644 DEBUG [StoreOpener-6d400feb19af72560059bfd56c267738-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info 2023-05-27 23:00:11,644 DEBUG [StoreOpener-6d400feb19af72560059bfd56c267738-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info 2023-05-27 23:00:11,644 INFO [StoreOpener-6d400feb19af72560059bfd56c267738-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6d400feb19af72560059bfd56c267738 columnFamilyName info 2023-05-27 23:00:11,654 DEBUG [StoreOpener-6d400feb19af72560059bfd56c267738-1] regionserver.HStore(539): loaded hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/86bf1c5f6ddc4f9f9b7b32bd5ee30adc.5284852e3c6fe0fc659026b96f907d72->hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/86bf1c5f6ddc4f9f9b7b32bd5ee30adc-top 2023-05-27 23:00:11,654 INFO 
[StoreOpener-6d400feb19af72560059bfd56c267738-1] regionserver.HStore(310): Store=6d400feb19af72560059bfd56c267738/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 23:00:11,655 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738 2023-05-27 23:00:11,656 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738 2023-05-27 23:00:11,659 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6d400feb19af72560059bfd56c267738 2023-05-27 23:00:11,660 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6d400feb19af72560059bfd56c267738; next sequenceid=86; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=752871, jitterRate=-0.042675942182540894}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 23:00:11,660 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6d400feb19af72560059bfd56c267738: 2023-05-27 23:00:11,660 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738., pid=18, masterSystemTime=1685228411638 2023-05-27 23:00:11,661 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 23:00:11,661 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 1 store files, 0 compacting, 1 eligible, 16 blocking 2023-05-27 23:00:11,662 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738. 2023-05-27 23:00:11,662 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HStore(1912): 6d400feb19af72560059bfd56c267738/info is initiating minor compaction (all files) 2023-05-27 23:00:11,662 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 6d400feb19af72560059bfd56c267738/info in TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738. 
2023-05-27 23:00:11,662 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/86bf1c5f6ddc4f9f9b7b32bd5ee30adc.5284852e3c6fe0fc659026b96f907d72->hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/86bf1c5f6ddc4f9f9b7b32bd5ee30adc-top] into tmpdir=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp, totalSize=72.5 K 2023-05-27 23:00:11,663 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.Compactor(207): Compacting 86bf1c5f6ddc4f9f9b7b32bd5ee30adc.5284852e3c6fe0fc659026b96f907d72, keycount=32, bloomtype=ROW, size=72.5 K, encoding=NONE, compression=NONE, seqNum=82, earliestPutTs=1685228398750 2023-05-27 23:00:11,663 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738. 2023-05-27 23:00:11,663 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738. 2023-05-27 23:00:11,663 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,,1685228410998.6f0e3e36d0fd48fe2fb462bffb5dcb9a. 2023-05-27 23:00:11,663 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6f0e3e36d0fd48fe2fb462bffb5dcb9a, NAME => 'TestLogRolling-testLogRolling,,1685228410998.6f0e3e36d0fd48fe2fb462bffb5dcb9a.', STARTKEY => '', ENDKEY => 'row0062'} 2023-05-27 23:00:11,663 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling 6f0e3e36d0fd48fe2fb462bffb5dcb9a 2023-05-27 23:00:11,663 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1685228410998.6f0e3e36d0fd48fe2fb462bffb5dcb9a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 23:00:11,664 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=6d400feb19af72560059bfd56c267738, regionState=OPEN, openSeqNum=86, regionLocation=jenkins-hbase4.apache.org,32987,1685228387703 2023-05-27 23:00:11,664 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6f0e3e36d0fd48fe2fb462bffb5dcb9a 2023-05-27 23:00:11,664 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6f0e3e36d0fd48fe2fb462bffb5dcb9a 2023-05-27 23:00:11,664 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685228411663"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685228411663"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685228411663"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685228411663"}]},"ts":"1685228411663"} 2023-05-27 23:00:11,665 INFO [StoreOpener-6f0e3e36d0fd48fe2fb462bffb5dcb9a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 6f0e3e36d0fd48fe2fb462bffb5dcb9a 2023-05-27 23:00:11,666 DEBUG [StoreOpener-6f0e3e36d0fd48fe2fb462bffb5dcb9a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6f0e3e36d0fd48fe2fb462bffb5dcb9a/info 2023-05-27 23:00:11,666 DEBUG [StoreOpener-6f0e3e36d0fd48fe2fb462bffb5dcb9a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6f0e3e36d0fd48fe2fb462bffb5dcb9a/info 2023-05-27 23:00:11,666 INFO [StoreOpener-6f0e3e36d0fd48fe2fb462bffb5dcb9a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6f0e3e36d0fd48fe2fb462bffb5dcb9a columnFamilyName info 2023-05-27 23:00:11,667 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=16 2023-05-27 23:00:11,667 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=16, state=SUCCESS; OpenRegionProcedure 6d400feb19af72560059bfd56c267738, server=jenkins-hbase4.apache.org,32987,1685228387703 in 178 msec 2023-05-27 23:00:11,669 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=6d400feb19af72560059bfd56c267738, ASSIGN in 336 msec 2023-05-27 23:00:11,670 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] throttle.PressureAwareThroughputController(145): 6d400feb19af72560059bfd56c267738#info#compaction#34 average throughput is 3.08 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-27 23:00:11,678 DEBUG [StoreOpener-6f0e3e36d0fd48fe2fb462bffb5dcb9a-1] regionserver.HStore(539): loaded hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6f0e3e36d0fd48fe2fb462bffb5dcb9a/info/86bf1c5f6ddc4f9f9b7b32bd5ee30adc.5284852e3c6fe0fc659026b96f907d72->hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/86bf1c5f6ddc4f9f9b7b32bd5ee30adc-bottom 2023-05-27 23:00:11,678 INFO [StoreOpener-6f0e3e36d0fd48fe2fb462bffb5dcb9a-1] regionserver.HStore(310): Store=6f0e3e36d0fd48fe2fb462bffb5dcb9a/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 23:00:11,678 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6f0e3e36d0fd48fe2fb462bffb5dcb9a 2023-05-27 23:00:11,680 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6f0e3e36d0fd48fe2fb462bffb5dcb9a 2023-05-27 23:00:11,682 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/115c4e96c77d455eb2999c1bc6780edf as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/115c4e96c77d455eb2999c1bc6780edf 2023-05-27 23:00:11,682 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6f0e3e36d0fd48fe2fb462bffb5dcb9a 2023-05-27 23:00:11,683 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6f0e3e36d0fd48fe2fb462bffb5dcb9a; next sequenceid=86; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=762699, jitterRate=-0.030178070068359375}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 23:00:11,683 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6f0e3e36d0fd48fe2fb462bffb5dcb9a: 2023-05-27 23:00:11,684 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1685228410998.6f0e3e36d0fd48fe2fb462bffb5dcb9a., pid=17, masterSystemTime=1685228411638 2023-05-27 23:00:11,684 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-05-27 23:00:11,686 DEBUG [RS:0;jenkins-hbase4:32987-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 1 store files, 0 compacting, 1 eligible, 16 blocking 2023-05-27 23:00:11,686 INFO [RS:0;jenkins-hbase4:32987-longCompactions-0] regionserver.HStore(1898): 
Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,,1685228410998.6f0e3e36d0fd48fe2fb462bffb5dcb9a. 2023-05-27 23:00:11,686 DEBUG [RS:0;jenkins-hbase4:32987-longCompactions-0] regionserver.HStore(1912): 6f0e3e36d0fd48fe2fb462bffb5dcb9a/info is initiating minor compaction (all files) 2023-05-27 23:00:11,687 INFO [RS:0;jenkins-hbase4:32987-longCompactions-0] regionserver.HRegion(2259): Starting compaction of 6f0e3e36d0fd48fe2fb462bffb5dcb9a/info in TestLogRolling-testLogRolling,,1685228410998.6f0e3e36d0fd48fe2fb462bffb5dcb9a. 2023-05-27 23:00:11,687 INFO [RS:0;jenkins-hbase4:32987-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6f0e3e36d0fd48fe2fb462bffb5dcb9a/info/86bf1c5f6ddc4f9f9b7b32bd5ee30adc.5284852e3c6fe0fc659026b96f907d72->hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/86bf1c5f6ddc4f9f9b7b32bd5ee30adc-bottom] into tmpdir=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6f0e3e36d0fd48fe2fb462bffb5dcb9a/.tmp, totalSize=72.5 K 2023-05-27 23:00:11,687 DEBUG [RS:0;jenkins-hbase4:32987-longCompactions-0] compactions.Compactor(207): Compacting 86bf1c5f6ddc4f9f9b7b32bd5ee30adc.5284852e3c6fe0fc659026b96f907d72, keycount=32, bloomtype=ROW, size=72.5 K, encoding=NONE, compression=NONE, seqNum=81, earliestPutTs=1685228398750 2023-05-27 23:00:11,687 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1685228410998.6f0e3e36d0fd48fe2fb462bffb5dcb9a. 2023-05-27 23:00:11,687 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1685228410998.6f0e3e36d0fd48fe2fb462bffb5dcb9a. 2023-05-27 23:00:11,688 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=6f0e3e36d0fd48fe2fb462bffb5dcb9a, regionState=OPEN, openSeqNum=86, regionLocation=jenkins-hbase4.apache.org,32987,1685228387703 2023-05-27 23:00:11,688 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1685228410998.6f0e3e36d0fd48fe2fb462bffb5dcb9a.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685228411688"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685228411688"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685228411688"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685228411688"}]},"ts":"1685228411688"} 2023-05-27 23:00:11,689 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 1 (all) file(s) in 6d400feb19af72560059bfd56c267738/info of 6d400feb19af72560059bfd56c267738 into 115c4e96c77d455eb2999c1bc6780edf(size=8.0 K), total size for store is 8.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-27 23:00:11,689 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 6d400feb19af72560059bfd56c267738: 2023-05-27 23:00:11,689 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738., storeName=6d400feb19af72560059bfd56c267738/info, priority=15, startTime=1685228411661; duration=0sec 2023-05-27 23:00:11,689 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 23:00:11,692 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-05-27 23:00:11,692 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; OpenRegionProcedure 6f0e3e36d0fd48fe2fb462bffb5dcb9a, server=jenkins-hbase4.apache.org,32987,1685228387703 in 204 msec 2023-05-27 23:00:11,693 INFO [RS:0;jenkins-hbase4:32987-longCompactions-0] throttle.PressureAwareThroughputController(145): 6f0e3e36d0fd48fe2fb462bffb5dcb9a#info#compaction#35 average throughput is 31.30 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-27 23:00:11,694 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=12 2023-05-27 23:00:11,694 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=6f0e3e36d0fd48fe2fb462bffb5dcb9a, ASSIGN in 361 msec 2023-05-27 23:00:11,696 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=5284852e3c6fe0fc659026b96f907d72, daughterA=6f0e3e36d0fd48fe2fb462bffb5dcb9a, daughterB=6d400feb19af72560059bfd56c267738 in 696 msec 2023-05-27 23:00:11,704 DEBUG [RS:0;jenkins-hbase4:32987-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6f0e3e36d0fd48fe2fb462bffb5dcb9a/.tmp/info/e5c5ad7cc0e742d0a8f1b58ca1c62407 as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6f0e3e36d0fd48fe2fb462bffb5dcb9a/info/e5c5ad7cc0e742d0a8f1b58ca1c62407 2023-05-27 23:00:11,710 INFO [RS:0;jenkins-hbase4:32987-longCompactions-0] regionserver.HStore(1652): Completed compaction of 1 (all) file(s) in 6f0e3e36d0fd48fe2fb462bffb5dcb9a/info of 6f0e3e36d0fd48fe2fb462bffb5dcb9a into e5c5ad7cc0e742d0a8f1b58ca1c62407(size=69.1 K), total size for store is 69.1 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-27 23:00:11,710 DEBUG [RS:0;jenkins-hbase4:32987-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for 6f0e3e36d0fd48fe2fb462bffb5dcb9a: 2023-05-27 23:00:11,710 INFO [RS:0;jenkins-hbase4:32987-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1685228410998.6f0e3e36d0fd48fe2fb462bffb5dcb9a., storeName=6f0e3e36d0fd48fe2fb462bffb5dcb9a/info, priority=15, startTime=1685228411684; duration=0sec 2023-05-27 23:00:11,710 DEBUG [RS:0;jenkins-hbase4:32987-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 23:00:12,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32987] ipc.CallRunner(144): callId: 75 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:36240 deadline: 1685228422935, exception=org.apache.hadoop.hbase.NotServingRegionException: TestLogRolling-testLogRolling,,1685228388738.5284852e3c6fe0fc659026b96f907d72. is not online on jenkins-hbase4.apache.org,32987,1685228387703 2023-05-27 23:00:16,807 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-27 23:00:22,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32987] regionserver.HRegion(9158): Flush requested on 6d400feb19af72560059bfd56c267738 2023-05-27 23:00:22,994 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 6d400feb19af72560059bfd56c267738 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-27 23:00:23,003 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=96 (bloomFilter=true), to=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/69bc3339d1374779bdd141240d2ed0b2 2023-05-27 23:00:23,009 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/69bc3339d1374779bdd141240d2ed0b2 as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/69bc3339d1374779bdd141240d2ed0b2 2023-05-27 23:00:23,014 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/69bc3339d1374779bdd141240d2ed0b2, entries=7, sequenceid=96, filesize=12.1 K 2023-05-27 23:00:23,015 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=18.91 KB/19368 for 6d400feb19af72560059bfd56c267738 in 21ms, sequenceid=96, compaction requested=false 2023-05-27 23:00:23,015 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 6d400feb19af72560059bfd56c267738: 2023-05-27 23:00:23,016 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32987] regionserver.HRegion(9158): Flush requested on 6d400feb19af72560059bfd56c267738 2023-05-27 23:00:23,016 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 6d400feb19af72560059bfd56c267738 1/1 column families, 
dataSize=19.96 KB heapSize=21.63 KB 2023-05-27 23:00:23,024 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=19.96 KB at sequenceid=118 (bloomFilter=true), to=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/edcab9e30a6d45e0a182510026ab5b9e 2023-05-27 23:00:23,029 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/edcab9e30a6d45e0a182510026ab5b9e as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/edcab9e30a6d45e0a182510026ab5b9e 2023-05-27 23:00:23,033 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/edcab9e30a6d45e0a182510026ab5b9e, entries=19, sequenceid=118, filesize=24.7 K 2023-05-27 23:00:23,034 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~19.96 KB/20444, heapSize ~21.61 KB/22128, currentSize=6.30 KB/6456 for 6d400feb19af72560059bfd56c267738 in 18ms, sequenceid=118, compaction requested=true 2023-05-27 23:00:23,034 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 6d400feb19af72560059bfd56c267738: 2023-05-27 23:00:23,035 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 23:00:23,035 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-27 23:00:23,036 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 45892 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-27 23:00:23,036 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HStore(1912): 6d400feb19af72560059bfd56c267738/info is initiating minor compaction (all files) 2023-05-27 23:00:23,036 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 6d400feb19af72560059bfd56c267738/info in TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738. 
2023-05-27 23:00:23,036 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/115c4e96c77d455eb2999c1bc6780edf, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/69bc3339d1374779bdd141240d2ed0b2, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/edcab9e30a6d45e0a182510026ab5b9e] into tmpdir=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp, totalSize=44.8 K 2023-05-27 23:00:23,036 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.Compactor(207): Compacting 115c4e96c77d455eb2999c1bc6780edf, keycount=3, bloomtype=ROW, size=8.0 K, encoding=NONE, compression=NONE, seqNum=82, earliestPutTs=1685228400832 2023-05-27 23:00:23,036 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.Compactor(207): Compacting 69bc3339d1374779bdd141240d2ed0b2, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=96, earliestPutTs=1685228422988 2023-05-27 23:00:23,037 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.Compactor(207): Compacting edcab9e30a6d45e0a182510026ab5b9e, keycount=19, bloomtype=ROW, size=24.7 K, encoding=NONE, compression=NONE, seqNum=118, earliestPutTs=1685228422995 2023-05-27 23:00:23,045 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] throttle.PressureAwareThroughputController(145): 6d400feb19af72560059bfd56c267738#info#compaction#38 average throughput is 29.76 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-27 23:00:23,056 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/3f00f5630fae4a7388e08f1ddbbad055 as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/3f00f5630fae4a7388e08f1ddbbad055 2023-05-27 23:00:23,061 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 6d400feb19af72560059bfd56c267738/info of 6d400feb19af72560059bfd56c267738 into 3f00f5630fae4a7388e08f1ddbbad055(size=35.5 K), total size for store is 35.5 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-27 23:00:23,061 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 6d400feb19af72560059bfd56c267738: 2023-05-27 23:00:23,061 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738., storeName=6d400feb19af72560059bfd56c267738/info, priority=13, startTime=1685228423034; duration=0sec 2023-05-27 23:00:23,061 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 23:00:25,024 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32987] regionserver.HRegion(9158): Flush requested on 6d400feb19af72560059bfd56c267738 2023-05-27 23:00:25,024 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 6d400feb19af72560059bfd56c267738 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-27 23:00:25,033 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=129 (bloomFilter=true), to=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/462588f08158413e9e06e1b7a1ebcfe9 2023-05-27 23:00:25,040 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/462588f08158413e9e06e1b7a1ebcfe9 as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/462588f08158413e9e06e1b7a1ebcfe9 2023-05-27 23:00:25,045 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/462588f08158413e9e06e1b7a1ebcfe9, entries=7, sequenceid=129, filesize=12.1 K 2023-05-27 23:00:25,046 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=19.96 KB/20444 for 6d400feb19af72560059bfd56c267738 in 22ms, sequenceid=129, compaction requested=false 2023-05-27 23:00:25,046 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 6d400feb19af72560059bfd56c267738: 2023-05-27 23:00:25,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32987] regionserver.HRegion(9158): Flush requested on 6d400feb19af72560059bfd56c267738 2023-05-27 23:00:25,046 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 6d400feb19af72560059bfd56c267738 1/1 column families, dataSize=21.02 KB heapSize=22.75 KB 2023-05-27 23:00:25,056 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.02 KB at sequenceid=152 (bloomFilter=true), to=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/1d8d2ca3f2b84ad0a0ac83e65268cae3 2023-05-27 23:00:25,058 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32987] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=6d400feb19af72560059bfd56c267738, server=jenkins-hbase4.apache.org,32987,1685228387703
    at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
    at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-05-27 23:00:25,058 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32987] ipc.CallRunner(144): callId: 141 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:36240 deadline: 1685228435058, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=6d400feb19af72560059bfd56c267738, server=jenkins-hbase4.apache.org,32987,1685228387703 2023-05-27 23:00:25,060 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/1d8d2ca3f2b84ad0a0ac83e65268cae3 as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/1d8d2ca3f2b84ad0a0ac83e65268cae3 2023-05-27 23:00:25,065 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/1d8d2ca3f2b84ad0a0ac83e65268cae3, entries=20, sequenceid=152, filesize=25.8 K 2023-05-27 23:00:25,066 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.02 KB/21520, heapSize ~22.73 KB/23280, currentSize=9.46 KB/9684 for 6d400feb19af72560059bfd56c267738 in 20ms, sequenceid=152, compaction requested=true 2023-05-27 23:00:25,066 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 6d400feb19af72560059bfd56c267738: 2023-05-27 23:00:25,066 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 23:00:25,066 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-27 23:00:25,067 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 75156 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-27 23:00:25,067 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HStore(1912): 6d400feb19af72560059bfd56c267738/info is initiating minor compaction (all files) 2023-05-27 23:00:25,067 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 
6d400feb19af72560059bfd56c267738/info in TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738. 2023-05-27 23:00:25,067 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/3f00f5630fae4a7388e08f1ddbbad055, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/462588f08158413e9e06e1b7a1ebcfe9, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/1d8d2ca3f2b84ad0a0ac83e65268cae3] into tmpdir=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp, totalSize=73.4 K 2023-05-27 23:00:25,068 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.Compactor(207): Compacting 3f00f5630fae4a7388e08f1ddbbad055, keycount=29, bloomtype=ROW, size=35.5 K, encoding=NONE, compression=NONE, seqNum=118, earliestPutTs=1685228400832 2023-05-27 23:00:25,068 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.Compactor(207): Compacting 462588f08158413e9e06e1b7a1ebcfe9, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=129, earliestPutTs=1685228423016 2023-05-27 23:00:25,068 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.Compactor(207): Compacting 1d8d2ca3f2b84ad0a0ac83e65268cae3, keycount=20, bloomtype=ROW, size=25.8 K, encoding=NONE, compression=NONE, seqNum=152, earliestPutTs=1685228425025 2023-05-27 23:00:25,078 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] throttle.PressureAwareThroughputController(145): 6d400feb19af72560059bfd56c267738#info#compaction#41 average throughput is 57.46 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-27 23:00:25,093 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/dede37346a924958aae0ce0bcb5952ed as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/dede37346a924958aae0ce0bcb5952ed 2023-05-27 23:00:25,099 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 6d400feb19af72560059bfd56c267738/info of 6d400feb19af72560059bfd56c267738 into dede37346a924958aae0ce0bcb5952ed(size=64.1 K), total size for store is 64.1 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-27 23:00:25,099 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 6d400feb19af72560059bfd56c267738: 2023-05-27 23:00:25,100 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738., storeName=6d400feb19af72560059bfd56c267738/info, priority=13, startTime=1685228425066; duration=0sec 2023-05-27 23:00:25,100 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 23:00:33,460 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): data stats (chunk size=2097152): current pool size=2, created chunk count=13, reused chunk count=32, reuseRatio=71.11% 2023-05-27 23:00:33,460 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): index stats (chunk size=209715): current pool size=0, created chunk count=0, reused chunk count=0, reuseRatio=0 2023-05-27 23:00:35,063 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32987] regionserver.HRegion(9158): Flush requested on 6d400feb19af72560059bfd56c267738 2023-05-27 23:00:35,064 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 6d400feb19af72560059bfd56c267738 1/1 column families, dataSize=10.51 KB heapSize=11.50 KB 2023-05-27 23:00:35,072 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.51 KB at sequenceid=166 (bloomFilter=true), to=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/82e5cf34e9e948f1bfc0b3b31b1c9591 2023-05-27 23:00:35,078 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/82e5cf34e9e948f1bfc0b3b31b1c9591 as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/82e5cf34e9e948f1bfc0b3b31b1c9591 2023-05-27 23:00:35,082 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/82e5cf34e9e948f1bfc0b3b31b1c9591, entries=10, sequenceid=166, filesize=15.3 K 2023-05-27 23:00:35,083 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~10.51 KB/10760, heapSize ~11.48 KB/11760, currentSize=1.05 KB/1076 for 6d400feb19af72560059bfd56c267738 in 19ms, sequenceid=166, compaction requested=false 2023-05-27 23:00:35,083 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 6d400feb19af72560059bfd56c267738: 2023-05-27 23:00:37,071 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32987] regionserver.HRegion(9158): Flush requested on 6d400feb19af72560059bfd56c267738 2023-05-27 23:00:37,072 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 6d400feb19af72560059bfd56c267738 1/1 column families, dataSize=7.36 KB 
heapSize=8.13 KB 2023-05-27 23:00:37,081 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=176 (bloomFilter=true), to=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/1d239c30d31843daadbd6089026e948b 2023-05-27 23:00:37,087 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/1d239c30d31843daadbd6089026e948b as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/1d239c30d31843daadbd6089026e948b 2023-05-27 23:00:37,092 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/1d239c30d31843daadbd6089026e948b, entries=7, sequenceid=176, filesize=12.1 K 2023-05-27 23:00:37,093 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=19.96 KB/20444 for 6d400feb19af72560059bfd56c267738 in 22ms, sequenceid=176, compaction requested=true 2023-05-27 23:00:37,093 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 6d400feb19af72560059bfd56c267738: 2023-05-27 23:00:37,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32987] regionserver.HRegion(9158): Flush requested on 6d400feb19af72560059bfd56c267738 2023-05-27 23:00:37,093 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-27 23:00:37,093 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 23:00:37,094 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 6d400feb19af72560059bfd56c267738 1/1 column families, dataSize=21.02 KB heapSize=22.75 KB 2023-05-27 23:00:37,095 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 93636 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-27 23:00:37,095 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HStore(1912): 6d400feb19af72560059bfd56c267738/info is initiating minor compaction (all files) 2023-05-27 23:00:37,095 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 6d400feb19af72560059bfd56c267738/info in TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738. 
2023-05-27 23:00:37,095 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/dede37346a924958aae0ce0bcb5952ed, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/82e5cf34e9e948f1bfc0b3b31b1c9591, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/1d239c30d31843daadbd6089026e948b] into tmpdir=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp, totalSize=91.4 K 2023-05-27 23:00:37,095 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.Compactor(207): Compacting dede37346a924958aae0ce0bcb5952ed, keycount=56, bloomtype=ROW, size=64.1 K, encoding=NONE, compression=NONE, seqNum=152, earliestPutTs=1685228400832 2023-05-27 23:00:37,096 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.Compactor(207): Compacting 82e5cf34e9e948f1bfc0b3b31b1c9591, keycount=10, bloomtype=ROW, size=15.3 K, encoding=NONE, compression=NONE, seqNum=166, earliestPutTs=1685228425047 2023-05-27 23:00:37,096 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.Compactor(207): Compacting 1d239c30d31843daadbd6089026e948b, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=176, earliestPutTs=1685228435064 2023-05-27 23:00:37,106 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.02 KB at sequenceid=199 (bloomFilter=true), to=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/b8df50057eff43f1a2a393c96f842468 2023-05-27 23:00:37,110 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] throttle.PressureAwareThroughputController(145): 6d400feb19af72560059bfd56c267738#info#compaction#45 average throughput is 37.45 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-27 23:00:37,113 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/b8df50057eff43f1a2a393c96f842468 as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/b8df50057eff43f1a2a393c96f842468 2023-05-27 23:00:37,117 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/b8df50057eff43f1a2a393c96f842468, entries=20, sequenceid=199, filesize=25.8 K 2023-05-27 23:00:37,118 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.02 KB/21520, heapSize ~22.73 KB/23280, currentSize=6.30 KB/6456 for 6d400feb19af72560059bfd56c267738 in 24ms, sequenceid=199, compaction requested=false 2023-05-27 23:00:37,118 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 6d400feb19af72560059bfd56c267738: 2023-05-27 23:00:37,131 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/cbfaf57876d446b694e3512a1b7b8ac4 as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/cbfaf57876d446b694e3512a1b7b8ac4 2023-05-27 23:00:37,136 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 6d400feb19af72560059bfd56c267738/info of 6d400feb19af72560059bfd56c267738 into cbfaf57876d446b694e3512a1b7b8ac4(size=82.1 K), total size for store is 107.9 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-27 23:00:37,136 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 6d400feb19af72560059bfd56c267738: 2023-05-27 23:00:37,136 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738., storeName=6d400feb19af72560059bfd56c267738/info, priority=13, startTime=1685228437093; duration=0sec 2023-05-27 23:00:37,137 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 23:00:39,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32987] regionserver.HRegion(9158): Flush requested on 6d400feb19af72560059bfd56c267738 2023-05-27 23:00:39,102 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 6d400feb19af72560059bfd56c267738 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-27 23:00:39,112 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=210 (bloomFilter=true), to=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/454d781874164d52bed3d9a537a4920f 2023-05-27 23:00:39,118 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/454d781874164d52bed3d9a537a4920f as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/454d781874164d52bed3d9a537a4920f 2023-05-27 23:00:39,123 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/454d781874164d52bed3d9a537a4920f, entries=7, sequenceid=210, filesize=12.1 K 2023-05-27 23:00:39,124 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=19.96 KB/20444 for 6d400feb19af72560059bfd56c267738 in 22ms, sequenceid=210, compaction requested=true 2023-05-27 23:00:39,124 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 6d400feb19af72560059bfd56c267738: 2023-05-27 23:00:39,124 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 23:00:39,124 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-27 23:00:39,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32987] regionserver.HRegion(9158): Flush requested on 6d400feb19af72560059bfd56c267738 2023-05-27 23:00:39,125 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 6d400feb19af72560059bfd56c267738 1/1 column families, dataSize=21.02 KB heapSize=22.75 KB 2023-05-27 23:00:39,125 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has 
selected 3 files of size 122937 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-27 23:00:39,126 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HStore(1912): 6d400feb19af72560059bfd56c267738/info is initiating minor compaction (all files) 2023-05-27 23:00:39,126 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 6d400feb19af72560059bfd56c267738/info in TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738. 2023-05-27 23:00:39,126 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/cbfaf57876d446b694e3512a1b7b8ac4, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/b8df50057eff43f1a2a393c96f842468, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/454d781874164d52bed3d9a537a4920f] into tmpdir=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp, totalSize=120.1 K 2023-05-27 23:00:39,127 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.Compactor(207): Compacting cbfaf57876d446b694e3512a1b7b8ac4, keycount=73, bloomtype=ROW, size=82.1 K, encoding=NONE, compression=NONE, seqNum=176, earliestPutTs=1685228400832 2023-05-27 23:00:39,127 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.Compactor(207): Compacting b8df50057eff43f1a2a393c96f842468, keycount=20, bloomtype=ROW, size=25.8 K, encoding=NONE, compression=NONE, seqNum=199, earliestPutTs=1685228437072 2023-05-27 23:00:39,128 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.Compactor(207): Compacting 454d781874164d52bed3d9a537a4920f, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=210, earliestPutTs=1685228437094 2023-05-27 23:00:39,138 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32987] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=6d400feb19af72560059bfd56c267738, server=jenkins-hbase4.apache.org,32987,1685228387703
    at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
    at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-05-27 23:00:39,138 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32987] ipc.CallRunner(144): callId: 207 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:36240 deadline: 1685228449138, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=6d400feb19af72560059bfd56c267738, server=jenkins-hbase4.apache.org,32987,1685228387703 2023-05-27 23:00:39,139 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.02 KB at sequenceid=233 (bloomFilter=true), to=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/2554da7dd7884cfa8d9d2c0ca7639981 2023-05-27 23:00:39,143 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] throttle.PressureAwareThroughputController(145): 6d400feb19af72560059bfd56c267738#info#compaction#48 average throughput is 51.31 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-27 23:00:39,144 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/2554da7dd7884cfa8d9d2c0ca7639981 as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/2554da7dd7884cfa8d9d2c0ca7639981 2023-05-27 23:00:39,151 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/2554da7dd7884cfa8d9d2c0ca7639981, entries=20, sequenceid=233, filesize=25.8 K 2023-05-27 23:00:39,152 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.02 KB/21520, heapSize ~22.73 KB/23280, currentSize=9.46 KB/9684 for 6d400feb19af72560059bfd56c267738 in 27ms, sequenceid=233, compaction requested=false 2023-05-27 23:00:39,152 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 6d400feb19af72560059bfd56c267738: 2023-05-27 23:00:39,154 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/4740095e50e34d50b2d6b9661a70e5ec as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/4740095e50e34d50b2d6b9661a70e5ec 2023-05-27 23:00:39,159 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 6d400feb19af72560059bfd56c267738/info of 6d400feb19af72560059bfd56c267738 into 4740095e50e34d50b2d6b9661a70e5ec(size=110.7 K), total size for store is 136.5 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-27 23:00:39,159 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 6d400feb19af72560059bfd56c267738: 2023-05-27 23:00:39,159 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738., storeName=6d400feb19af72560059bfd56c267738/info, priority=13, startTime=1685228439124; duration=0sec 2023-05-27 23:00:39,159 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 23:00:40,422 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-27 23:00:49,181 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32987] regionserver.HRegion(9158): Flush requested on 6d400feb19af72560059bfd56c267738 2023-05-27 23:00:49,181 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 6d400feb19af72560059bfd56c267738 1/1 column families, dataSize=10.51 KB heapSize=11.50 KB 2023-05-27 23:00:49,190 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.51 KB at sequenceid=247 (bloomFilter=true), to=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/222e3204ea11416091bb47727cdd8038 2023-05-27 23:00:49,196 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/222e3204ea11416091bb47727cdd8038 as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/222e3204ea11416091bb47727cdd8038 2023-05-27 23:00:49,200 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/222e3204ea11416091bb47727cdd8038, entries=10, sequenceid=247, filesize=15.3 K 2023-05-27 23:00:49,201 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~10.51 KB/10760, heapSize ~11.48 KB/11760, currentSize=1.05 KB/1076 for 6d400feb19af72560059bfd56c267738 in 20ms, sequenceid=247, compaction requested=true 2023-05-27 23:00:49,201 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 6d400feb19af72560059bfd56c267738: 2023-05-27 23:00:49,201 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 23:00:49,201 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-27 23:00:49,202 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 155387 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-27 23:00:49,202 DEBUG 
[RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HStore(1912): 6d400feb19af72560059bfd56c267738/info is initiating minor compaction (all files) 2023-05-27 23:00:49,202 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 6d400feb19af72560059bfd56c267738/info in TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738. 2023-05-27 23:00:49,202 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/4740095e50e34d50b2d6b9661a70e5ec, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/2554da7dd7884cfa8d9d2c0ca7639981, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/222e3204ea11416091bb47727cdd8038] into tmpdir=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp, totalSize=151.7 K 2023-05-27 23:00:49,203 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.Compactor(207): Compacting 4740095e50e34d50b2d6b9661a70e5ec, keycount=100, bloomtype=ROW, size=110.7 K, encoding=NONE, compression=NONE, seqNum=210, earliestPutTs=1685228400832 2023-05-27 23:00:49,203 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.Compactor(207): Compacting 2554da7dd7884cfa8d9d2c0ca7639981, keycount=20, bloomtype=ROW, size=25.8 K, encoding=NONE, compression=NONE, seqNum=233, earliestPutTs=1685228439103 2023-05-27 23:00:49,203 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.Compactor(207): Compacting 222e3204ea11416091bb47727cdd8038, keycount=10, bloomtype=ROW, size=15.3 K, encoding=NONE, compression=NONE, seqNum=247, earliestPutTs=1685228439125 2023-05-27 23:00:49,213 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] throttle.PressureAwareThroughputController(145): 6d400feb19af72560059bfd56c267738#info#compaction#50 average throughput is 66.70 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-27 23:00:49,225 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/9b296833e64943ceb7dc995a425a1c25 as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/9b296833e64943ceb7dc995a425a1c25 2023-05-27 23:00:49,230 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 6d400feb19af72560059bfd56c267738/info of 6d400feb19af72560059bfd56c267738 into 9b296833e64943ceb7dc995a425a1c25(size=142.5 K), total size for store is 142.5 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-27 23:00:49,230 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 6d400feb19af72560059bfd56c267738: 2023-05-27 23:00:49,230 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738., storeName=6d400feb19af72560059bfd56c267738/info, priority=13, startTime=1685228449201; duration=0sec 2023-05-27 23:00:49,230 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 23:00:51,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32987] regionserver.HRegion(9158): Flush requested on 6d400feb19af72560059bfd56c267738 2023-05-27 23:00:51,189 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 6d400feb19af72560059bfd56c267738 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-27 23:00:51,212 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=258 (bloomFilter=true), to=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/cfa6215133ef471c86a5dc2bb404aab3 2023-05-27 23:00:51,213 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32987] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=6d400feb19af72560059bfd56c267738, server=jenkins-hbase4.apache.org,32987,1685228387703 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-27 23:00:51,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32987] ipc.CallRunner(144): callId: 239 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:36240 deadline: 1685228461213, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=6d400feb19af72560059bfd56c267738, server=jenkins-hbase4.apache.org,32987,1685228387703 2023-05-27 23:00:51,218 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/cfa6215133ef471c86a5dc2bb404aab3 as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/cfa6215133ef471c86a5dc2bb404aab3 2023-05-27 23:00:51,223 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added 
hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/cfa6215133ef471c86a5dc2bb404aab3, entries=7, sequenceid=258, filesize=12.1 K 2023-05-27 23:00:51,224 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=23.12 KB/23672 for 6d400feb19af72560059bfd56c267738 in 35ms, sequenceid=258, compaction requested=false 2023-05-27 23:00:51,224 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 6d400feb19af72560059bfd56c267738: 2023-05-27 23:01:01,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32987] regionserver.HRegion(9158): Flush requested on 6d400feb19af72560059bfd56c267738 2023-05-27 23:01:01,296 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 6d400feb19af72560059bfd56c267738 1/1 column families, dataSize=24.17 KB heapSize=26.13 KB 2023-05-27 23:01:01,308 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.17 KB at sequenceid=284 (bloomFilter=true), to=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/86b9722a1e084a45a177ab01aef40161 2023-05-27 23:01:01,313 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/86b9722a1e084a45a177ab01aef40161 as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/86b9722a1e084a45a177ab01aef40161 2023-05-27 23:01:01,319 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/86b9722a1e084a45a177ab01aef40161, entries=23, sequenceid=284, filesize=29.0 K 2023-05-27 23:01:01,320 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~24.17 KB/24748, heapSize ~26.11 KB/26736, currentSize=3.15 KB/3228 for 6d400feb19af72560059bfd56c267738 in 25ms, sequenceid=284, compaction requested=true 2023-05-27 23:01:01,320 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 6d400feb19af72560059bfd56c267738: 2023-05-27 23:01:01,320 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 23:01:01,320 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-27 23:01:01,321 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 188059 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-27 23:01:01,322 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HStore(1912): 6d400feb19af72560059bfd56c267738/info is initiating minor compaction (all files) 2023-05-27 23:01:01,322 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 
6d400feb19af72560059bfd56c267738/info in TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738. 2023-05-27 23:01:01,322 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/9b296833e64943ceb7dc995a425a1c25, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/cfa6215133ef471c86a5dc2bb404aab3, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/86b9722a1e084a45a177ab01aef40161] into tmpdir=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp, totalSize=183.7 K 2023-05-27 23:01:01,322 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.Compactor(207): Compacting 9b296833e64943ceb7dc995a425a1c25, keycount=130, bloomtype=ROW, size=142.5 K, encoding=NONE, compression=NONE, seqNum=247, earliestPutTs=1685228400832 2023-05-27 23:01:01,323 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.Compactor(207): Compacting cfa6215133ef471c86a5dc2bb404aab3, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=258, earliestPutTs=1685228449181 2023-05-27 23:01:01,323 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.Compactor(207): Compacting 86b9722a1e084a45a177ab01aef40161, keycount=23, bloomtype=ROW, size=29.0 K, encoding=NONE, compression=NONE, seqNum=284, earliestPutTs=1685228451189 2023-05-27 23:01:01,334 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] throttle.PressureAwareThroughputController(145): 6d400feb19af72560059bfd56c267738#info#compaction#53 average throughput is 82.09 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-27 23:01:01,349 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/f05d13ea1df3413d9e48c8095750db67 as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/f05d13ea1df3413d9e48c8095750db67 2023-05-27 23:01:01,354 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 6d400feb19af72560059bfd56c267738/info of 6d400feb19af72560059bfd56c267738 into f05d13ea1df3413d9e48c8095750db67(size=174.2 K), total size for store is 174.2 K. This selection was in queue for 0sec, and took 0sec to execute. 
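The RegionTooBusyException logged a few entries above ("Over memstore limit=32.0 K") is a retryable signal that writes arrived faster than the memstore could flush. The stock HBase client already retries it internally, so the explicit catch below only matters when client retries are turned down; treat this as a sketch, with the row key, column qualifier, value, and back-off values being assumptions rather than anything taken from the test.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.RegionTooBusyException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class BusyRegionRetry {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("TestLogRolling-testLogRolling"))) {
      Put put = new Put(Bytes.toBytes("row0100"))
          .addColumn(Bytes.toBytes("info"), Bytes.toBytes("q"), Bytes.toBytes("value"));
      for (int attempt = 1; ; attempt++) {
        try {
          table.put(put);
          break;                                  // write accepted
        } catch (RegionTooBusyException busy) {
          if (attempt >= 5) throw busy;           // give up after a few tries
          Thread.sleep(200L * attempt);           // back off while flushes catch up
        }
      }
    }
  }
}
```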
2023-05-27 23:01:01,354 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 6d400feb19af72560059bfd56c267738: 2023-05-27 23:01:01,354 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738., storeName=6d400feb19af72560059bfd56c267738/info, priority=13, startTime=1685228461320; duration=0sec 2023-05-27 23:01:01,354 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 23:01:03,307 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32987] regionserver.HRegion(9158): Flush requested on 6d400feb19af72560059bfd56c267738 2023-05-27 23:01:03,307 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 6d400feb19af72560059bfd56c267738 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-27 23:01:03,316 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=295 (bloomFilter=true), to=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/95209dd4fbca4586bc25cc67319cf696 2023-05-27 23:01:03,322 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/95209dd4fbca4586bc25cc67319cf696 as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/95209dd4fbca4586bc25cc67319cf696 2023-05-27 23:01:03,327 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/95209dd4fbca4586bc25cc67319cf696, entries=7, sequenceid=295, filesize=12.1 K 2023-05-27 23:01:03,328 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=19.96 KB/20444 for 6d400feb19af72560059bfd56c267738 in 21ms, sequenceid=295, compaction requested=false 2023-05-27 23:01:03,328 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 6d400feb19af72560059bfd56c267738: 2023-05-27 23:01:03,329 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32987] regionserver.HRegion(9158): Flush requested on 6d400feb19af72560059bfd56c267738 2023-05-27 23:01:03,329 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 6d400feb19af72560059bfd56c267738 1/1 column families, dataSize=21.02 KB heapSize=22.75 KB 2023-05-27 23:01:03,341 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.02 KB at sequenceid=318 (bloomFilter=true), to=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/7d74291a6bb843d1b1840f08e68abb41 2023-05-27 23:01:03,348 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/7d74291a6bb843d1b1840f08e68abb41 as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/7d74291a6bb843d1b1840f08e68abb41 2023-05-27 23:01:03,353 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/7d74291a6bb843d1b1840f08e68abb41, entries=20, sequenceid=318, filesize=25.8 K 2023-05-27 23:01:03,354 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.02 KB/21520, heapSize ~22.73 KB/23280, currentSize=8.41 KB/8608 for 6d400feb19af72560059bfd56c267738 in 25ms, sequenceid=318, compaction requested=true 2023-05-27 23:01:03,354 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 6d400feb19af72560059bfd56c267738: 2023-05-27 23:01:03,354 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 23:01:03,354 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-27 23:01:03,355 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 217302 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-27 23:01:03,355 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HStore(1912): 6d400feb19af72560059bfd56c267738/info is initiating minor compaction (all files) 2023-05-27 23:01:03,355 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 6d400feb19af72560059bfd56c267738/info in TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738. 
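The flush-then-compact cycle shown above can also be driven explicitly from a client, which is useful when reproducing this pattern outside the test harness. A minimal sketch using the Admin API; the table name is taken from the log, while the polling interval is arbitrary and the cluster configuration is assumed to be on the classpath.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.CompactionState;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushThenCompact {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName table = TableName.valueOf("TestLogRolling-testLogRolling");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      admin.flush(table);                                  // force the memstore out to an HFile
      admin.compact(table);                                // queue a minor compaction
      while (admin.getCompactionState(table) != CompactionState.NONE) {
        Thread.sleep(500);                                 // wait for the compaction queue to drain
      }
    }
  }
}
```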
2023-05-27 23:01:03,355 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/f05d13ea1df3413d9e48c8095750db67, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/95209dd4fbca4586bc25cc67319cf696, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/7d74291a6bb843d1b1840f08e68abb41] into tmpdir=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp, totalSize=212.2 K 2023-05-27 23:01:03,356 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.Compactor(207): Compacting f05d13ea1df3413d9e48c8095750db67, keycount=160, bloomtype=ROW, size=174.2 K, encoding=NONE, compression=NONE, seqNum=284, earliestPutTs=1685228400832 2023-05-27 23:01:03,356 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.Compactor(207): Compacting 95209dd4fbca4586bc25cc67319cf696, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=295, earliestPutTs=1685228461296 2023-05-27 23:01:03,356 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] compactions.Compactor(207): Compacting 7d74291a6bb843d1b1840f08e68abb41, keycount=20, bloomtype=ROW, size=25.8 K, encoding=NONE, compression=NONE, seqNum=318, earliestPutTs=1685228463307 2023-05-27 23:01:03,367 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] throttle.PressureAwareThroughputController(145): 6d400feb19af72560059bfd56c267738#info#compaction#56 average throughput is 95.95 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-27 23:01:03,384 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/56be6be8af6a474cbcc42c895a2407c3 as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/56be6be8af6a474cbcc42c895a2407c3 2023-05-27 23:01:03,389 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 6d400feb19af72560059bfd56c267738/info of 6d400feb19af72560059bfd56c267738 into 56be6be8af6a474cbcc42c895a2407c3(size=202.9 K), total size for store is 202.9 K. This selection was in queue for 0sec, and took 0sec to execute. 
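The PressureAwareThroughputController entries above report each compaction's write rate against a 50.00 MB/second limit. Below is a deliberately simplified, self-contained sketch of that style of throttling (sleep whenever the observed rate exceeds a fixed cap); it is an illustration only and does not model the pressure-based limit adjustment the real controller performs.

```java
// Simplified throughput cap in the spirit of the 50 MB/s compaction limit
// reported above. Purely illustrative; the cap value and chunk size are assumptions.
public class SimpleThrottle {
  private final double maxBytesPerSecond;
  private final long start = System.nanoTime();
  private long written;

  SimpleThrottle(double maxBytesPerSecond) {
    this.maxBytesPerSecond = maxBytesPerSecond;
  }

  void control(long bytesJustWritten) throws InterruptedException {
    written += bytesJustWritten;
    double elapsedSec = (System.nanoTime() - start) / 1e9;
    double minSeconds = written / maxBytesPerSecond;  // minimum time this much data should take
    if (elapsedSec < minSeconds) {
      Thread.sleep((long) ((minSeconds - elapsedSec) * 1000));
    }
  }

  public static void main(String[] args) throws InterruptedException {
    SimpleThrottle throttle = new SimpleThrottle(50 * 1024 * 1024); // ~50 MB/s cap
    byte[] chunk = new byte[1 << 20];                               // pretend 1 MB writes
    for (int i = 0; i < 10; i++) {
      // ... write chunk somewhere ...
      throttle.control(chunk.length);
    }
  }
}
```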
2023-05-27 23:01:03,390 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 6d400feb19af72560059bfd56c267738: 2023-05-27 23:01:03,390 INFO [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738., storeName=6d400feb19af72560059bfd56c267738/info, priority=13, startTime=1685228463354; duration=0sec 2023-05-27 23:01:03,390 DEBUG [RS:0;jenkins-hbase4:32987-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-27 23:01:05,339 INFO [Listener at localhost/34663] wal.AbstractTestLogRolling(188): after writing there are 0 log files 2023-05-27 23:01:05,354 INFO [Listener at localhost/34663] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/WALs/jenkins-hbase4.apache.org,32987,1685228387703/jenkins-hbase4.apache.org%2C32987%2C1685228387703.1685228388081 with entries=312, filesize=307.75 KB; new WAL /user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/WALs/jenkins-hbase4.apache.org,32987,1685228387703/jenkins-hbase4.apache.org%2C32987%2C1685228387703.1685228465339 2023-05-27 23:01:05,354 DEBUG [Listener at localhost/34663] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36449,DS-6c04fc1d-591a-4746-931f-11b32c4d6b59,DISK], DatanodeInfoWithStorage[127.0.0.1:44837,DS-b245d3df-36ad-4c64-bf41-b98c6ccc406d,DISK]] 2023-05-27 23:01:05,354 DEBUG [Listener at localhost/34663] wal.AbstractFSWAL(716): hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/WALs/jenkins-hbase4.apache.org,32987,1685228387703/jenkins-hbase4.apache.org%2C32987%2C1685228387703.1685228388081 is not closed yet, will try archiving it next time 2023-05-27 23:01:05,360 INFO [Listener at localhost/34663] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.26 KB heapSize=4.19 KB 2023-05-27 23:01:05,367 INFO [Listener at localhost/34663] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.26 KB at sequenceid=24 (bloomFilter=false), to=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/meta/1588230740/.tmp/info/ce738501927a4514b13a47d656f52ef0 2023-05-27 23:01:05,372 DEBUG [Listener at localhost/34663] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/meta/1588230740/.tmp/info/ce738501927a4514b13a47d656f52ef0 as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/meta/1588230740/info/ce738501927a4514b13a47d656f52ef0 2023-05-27 23:01:05,377 INFO [Listener at localhost/34663] regionserver.HStore(1080): Added hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/meta/1588230740/info/ce738501927a4514b13a47d656f52ef0, entries=16, sequenceid=24, filesize=7.0 K 2023-05-27 23:01:05,377 INFO [Listener at localhost/34663] regionserver.HRegion(2948): Finished flush of dataSize ~2.26 KB/2312, heapSize ~3.67 KB/3760, currentSize=0 B/0 for 1588230740 in 17ms, sequenceid=24, compaction requested=false 2023-05-27 23:01:05,378 DEBUG [Listener at localhost/34663] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-05-27 23:01:05,378 INFO [Listener at 
localhost/34663] regionserver.HRegion(2745): Flushing e3a303c7ef932c4f2db8ca76b3c5e69f 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-27 23:01:05,389 INFO [Listener at localhost/34663] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/namespace/e3a303c7ef932c4f2db8ca76b3c5e69f/.tmp/info/45653a1119654d10a8c01cf13d7df68f 2023-05-27 23:01:05,394 DEBUG [Listener at localhost/34663] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/namespace/e3a303c7ef932c4f2db8ca76b3c5e69f/.tmp/info/45653a1119654d10a8c01cf13d7df68f as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/namespace/e3a303c7ef932c4f2db8ca76b3c5e69f/info/45653a1119654d10a8c01cf13d7df68f 2023-05-27 23:01:05,398 INFO [Listener at localhost/34663] regionserver.HStore(1080): Added hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/namespace/e3a303c7ef932c4f2db8ca76b3c5e69f/info/45653a1119654d10a8c01cf13d7df68f, entries=2, sequenceid=6, filesize=4.8 K 2023-05-27 23:01:05,399 INFO [Listener at localhost/34663] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for e3a303c7ef932c4f2db8ca76b3c5e69f in 21ms, sequenceid=6, compaction requested=false 2023-05-27 23:01:05,400 DEBUG [Listener at localhost/34663] regionserver.HRegion(2446): Flush status journal for e3a303c7ef932c4f2db8ca76b3c5e69f: 2023-05-27 23:01:05,400 DEBUG [Listener at localhost/34663] regionserver.HRegion(2446): Flush status journal for 6f0e3e36d0fd48fe2fb462bffb5dcb9a: 2023-05-27 23:01:05,400 INFO [Listener at localhost/34663] regionserver.HRegion(2745): Flushing 6d400feb19af72560059bfd56c267738 1/1 column families, dataSize=8.41 KB heapSize=9.25 KB 2023-05-27 23:01:05,407 INFO [Listener at localhost/34663] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=8.41 KB at sequenceid=330 (bloomFilter=true), to=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/d6d25dae99bb46c4aea8cc982dc1f54e 2023-05-27 23:01:05,411 DEBUG [Listener at localhost/34663] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/.tmp/info/d6d25dae99bb46c4aea8cc982dc1f54e as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/d6d25dae99bb46c4aea8cc982dc1f54e 2023-05-27 23:01:05,415 INFO [Listener at localhost/34663] regionserver.HStore(1080): Added hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/d6d25dae99bb46c4aea8cc982dc1f54e, entries=8, sequenceid=330, filesize=13.2 K 2023-05-27 23:01:05,416 INFO [Listener at localhost/34663] regionserver.HRegion(2948): Finished flush of dataSize ~8.41 KB/8608, heapSize ~9.23 KB/9456, currentSize=0 B/0 for 6d400feb19af72560059bfd56c267738 in 16ms, sequenceid=330, compaction requested=false 2023-05-27 23:01:05,416 DEBUG [Listener at localhost/34663] regionserver.HRegion(2446): 
Flush status journal for 6d400feb19af72560059bfd56c267738: 2023-05-27 23:01:05,422 INFO [Listener at localhost/34663] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/WALs/jenkins-hbase4.apache.org,32987,1685228387703/jenkins-hbase4.apache.org%2C32987%2C1685228387703.1685228465339 with entries=4, filesize=1.22 KB; new WAL /user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/WALs/jenkins-hbase4.apache.org,32987,1685228387703/jenkins-hbase4.apache.org%2C32987%2C1685228387703.1685228465416 2023-05-27 23:01:05,423 DEBUG [Listener at localhost/34663] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36449,DS-6c04fc1d-591a-4746-931f-11b32c4d6b59,DISK], DatanodeInfoWithStorage[127.0.0.1:44837,DS-b245d3df-36ad-4c64-bf41-b98c6ccc406d,DISK]] 2023-05-27 23:01:05,423 DEBUG [Listener at localhost/34663] wal.AbstractFSWAL(716): hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/WALs/jenkins-hbase4.apache.org,32987,1685228387703/jenkins-hbase4.apache.org%2C32987%2C1685228387703.1685228465339 is not closed yet, will try archiving it next time 2023-05-27 23:01:05,423 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/WALs/jenkins-hbase4.apache.org,32987,1685228387703/jenkins-hbase4.apache.org%2C32987%2C1685228387703.1685228388081 to hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/oldWALs/jenkins-hbase4.apache.org%2C32987%2C1685228387703.1685228388081 2023-05-27 23:01:05,424 INFO [Listener at localhost/34663] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-05-27 23:01:05,425 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/WALs/jenkins-hbase4.apache.org,32987,1685228387703/jenkins-hbase4.apache.org%2C32987%2C1685228387703.1685228465339 to hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/oldWALs/jenkins-hbase4.apache.org%2C32987%2C1685228387703.1685228465339 2023-05-27 23:01:05,524 INFO [Listener at localhost/34663] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-27 23:01:05,524 INFO [Listener at localhost/34663] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-27 23:01:05,524 DEBUG [Listener at localhost/34663] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5877aab8 to 127.0.0.1:54987 2023-05-27 23:01:05,524 DEBUG [Listener at localhost/34663] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 23:01:05,524 DEBUG [Listener at localhost/34663] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-27 23:01:05,524 DEBUG [Listener at localhost/34663] util.JVMClusterUtil(257): Found active master hash=354781437, stopped=false 2023-05-27 23:01:05,525 INFO [Listener at localhost/34663] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,44157,1685228387663 2023-05-27 23:01:05,527 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): regionserver:32987-0x1006edf00400001, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-27 23:01:05,527 INFO [Listener at localhost/34663] procedure2.ProcedureExecutor(629): Stopping 2023-05-27 23:01:05,527 DEBUG [Listener at localhost/34663-EventThread] 
zookeeper.ZKWatcher(600): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-27 23:01:05,527 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 23:01:05,527 DEBUG [Listener at localhost/34663] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x794ce8fe to 127.0.0.1:54987 2023-05-27 23:01:05,528 DEBUG [Listener at localhost/34663] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 23:01:05,528 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 23:01:05,528 INFO [Listener at localhost/34663] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,32987,1685228387703' ***** 2023-05-27 23:01:05,528 INFO [Listener at localhost/34663] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-27 23:01:05,528 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:32987-0x1006edf00400001, quorum=127.0.0.1:54987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 23:01:05,528 INFO [RS:0;jenkins-hbase4:32987] regionserver.HeapMemoryManager(220): Stopping 2023-05-27 23:01:05,528 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-27 23:01:05,528 INFO [RS:0;jenkins-hbase4:32987] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-27 23:01:05,528 INFO [RS:0;jenkins-hbase4:32987] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-27 23:01:05,529 INFO [RS:0;jenkins-hbase4:32987] regionserver.HRegionServer(3303): Received CLOSE for e3a303c7ef932c4f2db8ca76b3c5e69f 2023-05-27 23:01:05,529 INFO [RS:0;jenkins-hbase4:32987] regionserver.HRegionServer(3303): Received CLOSE for 6f0e3e36d0fd48fe2fb462bffb5dcb9a 2023-05-27 23:01:05,529 INFO [RS:0;jenkins-hbase4:32987] regionserver.HRegionServer(3303): Received CLOSE for 6d400feb19af72560059bfd56c267738 2023-05-27 23:01:05,529 INFO [RS:0;jenkins-hbase4:32987] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,32987,1685228387703 2023-05-27 23:01:05,529 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e3a303c7ef932c4f2db8ca76b3c5e69f, disabling compactions & flushes 2023-05-27 23:01:05,529 DEBUG [RS:0;jenkins-hbase4:32987] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x08644503 to 127.0.0.1:54987 2023-05-27 23:01:05,529 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685228388233.e3a303c7ef932c4f2db8ca76b3c5e69f. 2023-05-27 23:01:05,529 DEBUG [RS:0;jenkins-hbase4:32987] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 23:01:05,529 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685228388233.e3a303c7ef932c4f2db8ca76b3c5e69f. 2023-05-27 23:01:05,529 INFO [RS:0;jenkins-hbase4:32987] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
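Just before the shutdown above began, the listener rolled the WAL twice and the old files were archived to oldWALs. A roll can also be requested from a client through the Admin API; the sketch below assumes the same host,port,startcode server-name format the log uses and is not part of the test itself.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class RollWal {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // host,port,startcode — the same form the log uses for the region server.
      ServerName rs = ServerName.valueOf("jenkins-hbase4.apache.org,32987,1685228387703");
      admin.rollWALWriter(rs);  // ask the region server to start a new WAL file
    }
  }
}
```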
2023-05-27 23:01:05,529 INFO [RS:0;jenkins-hbase4:32987] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-27 23:01:05,529 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685228388233.e3a303c7ef932c4f2db8ca76b3c5e69f. after waiting 0 ms 2023-05-27 23:01:05,529 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685228388233.e3a303c7ef932c4f2db8ca76b3c5e69f. 2023-05-27 23:01:05,529 INFO [RS:0;jenkins-hbase4:32987] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-27 23:01:05,529 INFO [RS:0;jenkins-hbase4:32987] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-27 23:01:05,529 INFO [RS:0;jenkins-hbase4:32987] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-05-27 23:01:05,529 DEBUG [RS:0;jenkins-hbase4:32987] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, e3a303c7ef932c4f2db8ca76b3c5e69f=hbase:namespace,,1685228388233.e3a303c7ef932c4f2db8ca76b3c5e69f., 6f0e3e36d0fd48fe2fb462bffb5dcb9a=TestLogRolling-testLogRolling,,1685228410998.6f0e3e36d0fd48fe2fb462bffb5dcb9a., 6d400feb19af72560059bfd56c267738=TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738.} 2023-05-27 23:01:05,529 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-27 23:01:05,529 DEBUG [RS:0;jenkins-hbase4:32987] regionserver.HRegionServer(1504): Waiting on 1588230740, 6d400feb19af72560059bfd56c267738, 6f0e3e36d0fd48fe2fb462bffb5dcb9a, e3a303c7ef932c4f2db8ca76b3c5e69f 2023-05-27 23:01:05,530 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-27 23:01:05,531 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-27 23:01:05,531 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-27 23:01:05,531 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-27 23:01:05,537 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/namespace/e3a303c7ef932c4f2db8ca76b3c5e69f/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-27 23:01:05,538 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/hbase/meta/1588230740/recovered.edits/27.seqid, newMaxSeqId=27, maxSeqId=1 2023-05-27 23:01:05,538 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685228388233.e3a303c7ef932c4f2db8ca76b3c5e69f. 2023-05-27 23:01:05,538 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e3a303c7ef932c4f2db8ca76b3c5e69f: 2023-05-27 23:01:05,539 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685228388233.e3a303c7ef932c4f2db8ca76b3c5e69f. 
2023-05-27 23:01:05,539 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6f0e3e36d0fd48fe2fb462bffb5dcb9a, disabling compactions & flushes 2023-05-27 23:01:05,539 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1685228410998.6f0e3e36d0fd48fe2fb462bffb5dcb9a. 2023-05-27 23:01:05,539 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1685228410998.6f0e3e36d0fd48fe2fb462bffb5dcb9a. 2023-05-27 23:01:05,539 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1685228410998.6f0e3e36d0fd48fe2fb462bffb5dcb9a. after waiting 0 ms 2023-05-27 23:01:05,539 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1685228410998.6f0e3e36d0fd48fe2fb462bffb5dcb9a. 2023-05-27 23:01:05,540 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-27 23:01:05,540 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685228410998.6f0e3e36d0fd48fe2fb462bffb5dcb9a.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6f0e3e36d0fd48fe2fb462bffb5dcb9a/info/86bf1c5f6ddc4f9f9b7b32bd5ee30adc.5284852e3c6fe0fc659026b96f907d72->hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/86bf1c5f6ddc4f9f9b7b32bd5ee30adc-bottom] to archive 2023-05-27 23:01:05,540 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-27 23:01:05,540 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-27 23:01:05,541 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-27 23:01:05,541 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685228410998.6f0e3e36d0fd48fe2fb462bffb5dcb9a.-1] backup.HFileArchiver(360): Archiving compacted files. 
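The HFileArchiver entries around this point move each compacted store file from the data directory to a mirror location under archive/, preserving the table/region/family layout. A tiny hypothetical helper showing that path mapping; the real move is performed server-side by HFileArchiver, not by client code like this.

```java
// Hypothetical helper mirroring the data/ -> archive/data/ move visible in the
// HFileArchiver entries nearby. Paths follow the layout shown in the log.
public class ArchivePath {
  static String toArchive(String rootDir, String storeFilePath) {
    String dataPrefix = rootDir + "/data/";
    if (!storeFilePath.startsWith(dataPrefix)) {
      throw new IllegalArgumentException("not under " + dataPrefix);
    }
    return rootDir + "/archive/data/" + storeFilePath.substring(dataPrefix.length());
  }

  public static void main(String[] args) {
    String root = "hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147";
    System.out.println(toArchive(root,
        root + "/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/4740095e50e34d50b2d6b9661a70e5ec"));
  }
}
```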
2023-05-27 23:01:05,543 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685228410998.6f0e3e36d0fd48fe2fb462bffb5dcb9a.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6f0e3e36d0fd48fe2fb462bffb5dcb9a/info/86bf1c5f6ddc4f9f9b7b32bd5ee30adc.5284852e3c6fe0fc659026b96f907d72 to hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/archive/data/default/TestLogRolling-testLogRolling/6f0e3e36d0fd48fe2fb462bffb5dcb9a/info/86bf1c5f6ddc4f9f9b7b32bd5ee30adc.5284852e3c6fe0fc659026b96f907d72 2023-05-27 23:01:05,547 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6f0e3e36d0fd48fe2fb462bffb5dcb9a/recovered.edits/90.seqid, newMaxSeqId=90, maxSeqId=85 2023-05-27 23:01:05,548 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1685228410998.6f0e3e36d0fd48fe2fb462bffb5dcb9a. 2023-05-27 23:01:05,548 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6f0e3e36d0fd48fe2fb462bffb5dcb9a: 2023-05-27 23:01:05,548 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,,1685228410998.6f0e3e36d0fd48fe2fb462bffb5dcb9a. 2023-05-27 23:01:05,548 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6d400feb19af72560059bfd56c267738, disabling compactions & flushes 2023-05-27 23:01:05,548 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738. 2023-05-27 23:01:05,548 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738. 2023-05-27 23:01:05,548 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738. after waiting 0 ms 2023-05-27 23:01:05,548 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738. 
2023-05-27 23:01:05,563 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/86bf1c5f6ddc4f9f9b7b32bd5ee30adc.5284852e3c6fe0fc659026b96f907d72->hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/5284852e3c6fe0fc659026b96f907d72/info/86bf1c5f6ddc4f9f9b7b32bd5ee30adc-top, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/115c4e96c77d455eb2999c1bc6780edf, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/69bc3339d1374779bdd141240d2ed0b2, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/3f00f5630fae4a7388e08f1ddbbad055, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/edcab9e30a6d45e0a182510026ab5b9e, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/462588f08158413e9e06e1b7a1ebcfe9, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/dede37346a924958aae0ce0bcb5952ed, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/1d8d2ca3f2b84ad0a0ac83e65268cae3, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/82e5cf34e9e948f1bfc0b3b31b1c9591, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/cbfaf57876d446b694e3512a1b7b8ac4, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/1d239c30d31843daadbd6089026e948b, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/b8df50057eff43f1a2a393c96f842468, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/4740095e50e34d50b2d6b9661a70e5ec, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/454d781874164d52bed3d9a537a4920f, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/2554da7dd7884cfa8d9d2c0ca7639981, 
hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/9b296833e64943ceb7dc995a425a1c25, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/222e3204ea11416091bb47727cdd8038, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/cfa6215133ef471c86a5dc2bb404aab3, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/f05d13ea1df3413d9e48c8095750db67, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/86b9722a1e084a45a177ab01aef40161, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/95209dd4fbca4586bc25cc67319cf696, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/7d74291a6bb843d1b1840f08e68abb41] to archive 2023-05-27 23:01:05,563 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738.-1] backup.HFileArchiver(360): Archiving compacted files. 2023-05-27 23:01:05,565 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/86bf1c5f6ddc4f9f9b7b32bd5ee30adc.5284852e3c6fe0fc659026b96f907d72 to hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/archive/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/86bf1c5f6ddc4f9f9b7b32bd5ee30adc.5284852e3c6fe0fc659026b96f907d72 2023-05-27 23:01:05,566 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/115c4e96c77d455eb2999c1bc6780edf to hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/archive/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/115c4e96c77d455eb2999c1bc6780edf 2023-05-27 23:01:05,567 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/69bc3339d1374779bdd141240d2ed0b2 to hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/archive/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/69bc3339d1374779bdd141240d2ed0b2 2023-05-27 23:01:05,568 DEBUG 
[StoreCloser-TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/3f00f5630fae4a7388e08f1ddbbad055 to hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/archive/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/3f00f5630fae4a7388e08f1ddbbad055 2023-05-27 23:01:05,569 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/edcab9e30a6d45e0a182510026ab5b9e to hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/archive/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/edcab9e30a6d45e0a182510026ab5b9e 2023-05-27 23:01:05,570 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/462588f08158413e9e06e1b7a1ebcfe9 to hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/archive/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/462588f08158413e9e06e1b7a1ebcfe9 2023-05-27 23:01:05,571 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/dede37346a924958aae0ce0bcb5952ed to hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/archive/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/dede37346a924958aae0ce0bcb5952ed 2023-05-27 23:01:05,572 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/1d8d2ca3f2b84ad0a0ac83e65268cae3 to hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/archive/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/1d8d2ca3f2b84ad0a0ac83e65268cae3 2023-05-27 23:01:05,573 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/82e5cf34e9e948f1bfc0b3b31b1c9591 to hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/archive/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/82e5cf34e9e948f1bfc0b3b31b1c9591 
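One way to confirm after a run like this that the compacted files really landed in the archive is to list the region's archive directory over HDFS. A sketch with the Hadoop FileSystem API; the path is copied from the log, and running it against the mini-cluster's NameNode address is an assumption here.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListArchivedFiles {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path archiveDir = new Path("hdfs://localhost:33271/user/jenkins/test-data/"
        + "6c043de4-6d9d-5ed6-c5db-897199b44147/archive/data/default/"
        + "TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info");
    try (FileSystem fs = archiveDir.getFileSystem(conf)) {
      for (FileStatus status : fs.listStatus(archiveDir)) {
        System.out.printf("%s\t%d bytes%n", status.getPath().getName(), status.getLen());
      }
    }
  }
}
```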
2023-05-27 23:01:05,574 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/cbfaf57876d446b694e3512a1b7b8ac4 to hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/archive/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/cbfaf57876d446b694e3512a1b7b8ac4 2023-05-27 23:01:05,575 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/1d239c30d31843daadbd6089026e948b to hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/archive/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/1d239c30d31843daadbd6089026e948b 2023-05-27 23:01:05,576 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/b8df50057eff43f1a2a393c96f842468 to hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/archive/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/b8df50057eff43f1a2a393c96f842468 2023-05-27 23:01:05,577 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/4740095e50e34d50b2d6b9661a70e5ec to hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/archive/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/4740095e50e34d50b2d6b9661a70e5ec 2023-05-27 23:01:05,578 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/454d781874164d52bed3d9a537a4920f to hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/archive/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/454d781874164d52bed3d9a537a4920f 2023-05-27 23:01:05,579 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/2554da7dd7884cfa8d9d2c0ca7639981 to 
hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/archive/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/2554da7dd7884cfa8d9d2c0ca7639981 2023-05-27 23:01:05,580 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/9b296833e64943ceb7dc995a425a1c25 to hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/archive/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/9b296833e64943ceb7dc995a425a1c25 2023-05-27 23:01:05,581 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/222e3204ea11416091bb47727cdd8038 to hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/archive/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/222e3204ea11416091bb47727cdd8038 2023-05-27 23:01:05,582 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/cfa6215133ef471c86a5dc2bb404aab3 to hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/archive/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/cfa6215133ef471c86a5dc2bb404aab3 2023-05-27 23:01:05,583 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/f05d13ea1df3413d9e48c8095750db67 to hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/archive/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/f05d13ea1df3413d9e48c8095750db67 2023-05-27 23:01:05,584 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/86b9722a1e084a45a177ab01aef40161 to hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/archive/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/86b9722a1e084a45a177ab01aef40161 2023-05-27 23:01:05,585 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, 
hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/95209dd4fbca4586bc25cc67319cf696 to hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/archive/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/95209dd4fbca4586bc25cc67319cf696 2023-05-27 23:01:05,586 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/7d74291a6bb843d1b1840f08e68abb41 to hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/archive/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/info/7d74291a6bb843d1b1840f08e68abb41 2023-05-27 23:01:05,590 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/data/default/TestLogRolling-testLogRolling/6d400feb19af72560059bfd56c267738/recovered.edits/333.seqid, newMaxSeqId=333, maxSeqId=85 2023-05-27 23:01:05,591 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738. 2023-05-27 23:01:05,591 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6d400feb19af72560059bfd56c267738: 2023-05-27 23:01:05,592 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,row0062,1685228410998.6d400feb19af72560059bfd56c267738. 2023-05-27 23:01:05,731 INFO [RS:0;jenkins-hbase4:32987] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,32987,1685228387703; all regions closed. 
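For context, the StoreCloser entries above archive each HFile of the closing region under archive/ before the region server reports all regions closed, and the shutdown entries that follow move its WAL files to oldWALs. A minimal sketch, assuming a test-utility field named TEST_UTIL and an already-created table (neither name appears verbatim in this log), of how a test can force the kind of WAL roll and flush activity that this shutdown is cleaning up after:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class WalRollSketch {
  // Assumed test utility instance; a real test wires this up in its setup method.
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  static void rollAndFlush(TableName table) throws Exception {
    Admin admin = TEST_UTIL.getAdmin();
    // Roll the WAL of the single region server in the mini cluster; after the
    // following flush, the rolled WAL becomes eligible to be moved to oldWALs.
    ServerName rs = TEST_UTIL.getHBaseCluster().getRegionServer(0).getServerName();
    admin.rollWALWriter(rs);
    admin.flush(table);
  }
}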
2023-05-27 23:01:05,731 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/WALs/jenkins-hbase4.apache.org,32987,1685228387703 2023-05-27 23:01:05,736 DEBUG [RS:0;jenkins-hbase4:32987] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/oldWALs 2023-05-27 23:01:05,736 INFO [RS:0;jenkins-hbase4:32987] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C32987%2C1685228387703.meta:.meta(num 1685228388183) 2023-05-27 23:01:05,737 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/WALs/jenkins-hbase4.apache.org,32987,1685228387703 2023-05-27 23:01:05,742 DEBUG [RS:0;jenkins-hbase4:32987] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/oldWALs 2023-05-27 23:01:05,742 INFO [RS:0;jenkins-hbase4:32987] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C32987%2C1685228387703:(num 1685228465416) 2023-05-27 23:01:05,742 DEBUG [RS:0;jenkins-hbase4:32987] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 23:01:05,742 INFO [RS:0;jenkins-hbase4:32987] regionserver.LeaseManager(133): Closed leases 2023-05-27 23:01:05,742 INFO [RS:0;jenkins-hbase4:32987] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-05-27 23:01:05,742 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-27 23:01:05,743 INFO [RS:0;jenkins-hbase4:32987] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:32987 2023-05-27 23:01:05,745 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): regionserver:32987-0x1006edf00400001, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,32987,1685228387703 2023-05-27 23:01:05,745 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 23:01:05,745 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): regionserver:32987-0x1006edf00400001, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 23:01:05,746 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,32987,1685228387703] 2023-05-27 23:01:05,746 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,32987,1685228387703; numProcessing=1 2023-05-27 23:01:05,748 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,32987,1685228387703 already deleted, retry=false 2023-05-27 23:01:05,748 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,32987,1685228387703 expired; onlineServers=0 2023-05-27 23:01:05,748 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,44157,1685228387663' ***** 
2023-05-27 23:01:05,748 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-27 23:01:05,748 DEBUG [M:0;jenkins-hbase4:44157] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@111a1e81, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-27 23:01:05,748 INFO [M:0;jenkins-hbase4:44157] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44157,1685228387663 2023-05-27 23:01:05,748 INFO [M:0;jenkins-hbase4:44157] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44157,1685228387663; all regions closed. 2023-05-27 23:01:05,748 DEBUG [M:0;jenkins-hbase4:44157] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 23:01:05,749 DEBUG [M:0;jenkins-hbase4:44157] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-27 23:01:05,749 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-05-27 23:01:05,749 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685228387833] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685228387833,5,FailOnTimeoutGroup] 2023-05-27 23:01:05,749 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685228387832] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685228387832,5,FailOnTimeoutGroup] 2023-05-27 23:01:05,749 DEBUG [M:0;jenkins-hbase4:44157] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-27 23:01:05,750 INFO [M:0;jenkins-hbase4:44157] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-27 23:01:05,750 INFO [M:0;jenkins-hbase4:44157] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-27 23:01:05,750 INFO [M:0;jenkins-hbase4:44157] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-05-27 23:01:05,750 DEBUG [M:0;jenkins-hbase4:44157] master.HMaster(1512): Stopping service threads 2023-05-27 23:01:05,750 INFO [M:0;jenkins-hbase4:44157] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-27 23:01:05,751 ERROR [M:0;jenkins-hbase4:44157] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-27 23:01:05,751 INFO [M:0;jenkins-hbase4:44157] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-27 23:01:05,751 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-27 23:01:05,751 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-27 23:01:05,751 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 23:01:05,751 DEBUG [M:0;jenkins-hbase4:44157] zookeeper.ZKUtil(398): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-27 23:01:05,751 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 23:01:05,751 WARN [M:0;jenkins-hbase4:44157] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-27 23:01:05,751 INFO [M:0;jenkins-hbase4:44157] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-27 23:01:05,752 INFO [M:0;jenkins-hbase4:44157] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-27 23:01:05,752 DEBUG [M:0;jenkins-hbase4:44157] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-27 23:01:05,752 INFO [M:0;jenkins-hbase4:44157] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 23:01:05,752 DEBUG [M:0;jenkins-hbase4:44157] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 23:01:05,752 DEBUG [M:0;jenkins-hbase4:44157] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-27 23:01:05,752 DEBUG [M:0;jenkins-hbase4:44157] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-27 23:01:05,752 INFO [M:0;jenkins-hbase4:44157] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=64.70 KB heapSize=78.42 KB 2023-05-27 23:01:05,761 INFO [M:0;jenkins-hbase4:44157] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=64.70 KB at sequenceid=160 (bloomFilter=true), to=hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/958ee9efc7d047c3b2391d80d70af965 2023-05-27 23:01:05,765 INFO [M:0;jenkins-hbase4:44157] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 958ee9efc7d047c3b2391d80d70af965 2023-05-27 23:01:05,767 DEBUG [M:0;jenkins-hbase4:44157] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/958ee9efc7d047c3b2391d80d70af965 as hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/958ee9efc7d047c3b2391d80d70af965 2023-05-27 23:01:05,771 INFO [M:0;jenkins-hbase4:44157] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 958ee9efc7d047c3b2391d80d70af965 2023-05-27 23:01:05,771 INFO [M:0;jenkins-hbase4:44157] regionserver.HStore(1080): Added hdfs://localhost:33271/user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/958ee9efc7d047c3b2391d80d70af965, entries=18, sequenceid=160, filesize=6.9 K 2023-05-27 23:01:05,772 INFO [M:0;jenkins-hbase4:44157] regionserver.HRegion(2948): Finished flush of dataSize ~64.70 KB/66256, heapSize ~78.41 KB/80288, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 20ms, sequenceid=160, compaction requested=false 2023-05-27 23:01:05,773 INFO [M:0;jenkins-hbase4:44157] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 23:01:05,773 DEBUG [M:0;jenkins-hbase4:44157] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 23:01:05,773 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/6c043de4-6d9d-5ed6-c5db-897199b44147/MasterData/WALs/jenkins-hbase4.apache.org,44157,1685228387663 2023-05-27 23:01:05,777 INFO [M:0;jenkins-hbase4:44157] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-27 23:01:05,777 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-27 23:01:05,778 INFO [M:0;jenkins-hbase4:44157] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44157 2023-05-27 23:01:05,780 DEBUG [M:0;jenkins-hbase4:44157] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,44157,1685228387663 already deleted, retry=false 2023-05-27 23:01:05,847 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): regionserver:32987-0x1006edf00400001, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 23:01:05,847 INFO [RS:0;jenkins-hbase4:32987] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,32987,1685228387703; zookeeper connection closed. 
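At this point the region server has exited and closed its ZooKeeper connection; the entries that follow stop the datanodes, the mini DFS and the mini ZooKeeper cluster until "Minicluster is down" is logged. A minimal sketch, with the TEST_UTIL field name assumed (the shutdown call itself is the standard HBaseTestingUtility API), of the teardown that produces this sequence:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.After;

public class TeardownSketch {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @After
  public void tearDown() throws Exception {
    // Stops the master and region server, then the mini DFS datanodes/namenode and
    // the mini ZooKeeper quorum, in the same order seen in the log entries above.
    TEST_UTIL.shutdownMiniCluster();
  }
}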
2023-05-27 23:01:05,847 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): regionserver:32987-0x1006edf00400001, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 23:01:05,848 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@171d4a16] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@171d4a16 2023-05-27 23:01:05,848 INFO [Listener at localhost/34663] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-27 23:01:05,948 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 23:01:05,948 INFO [M:0;jenkins-hbase4:44157] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44157,1685228387663; zookeeper connection closed. 2023-05-27 23:01:05,948 DEBUG [Listener at localhost/34663-EventThread] zookeeper.ZKWatcher(600): master:44157-0x1006edf00400000, quorum=127.0.0.1:54987, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 23:01:05,949 WARN [Listener at localhost/34663] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 23:01:05,954 INFO [Listener at localhost/34663] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 23:01:05,960 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-27 23:01:06,059 WARN [BP-1631018543-172.31.14.131-1685228387104 heartbeating to localhost/127.0.0.1:33271] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-27 23:01:06,059 WARN [BP-1631018543-172.31.14.131-1685228387104 heartbeating to localhost/127.0.0.1:33271] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1631018543-172.31.14.131-1685228387104 (Datanode Uuid 34496a20-6b34-46f1-b265-34bac389eda9) service to localhost/127.0.0.1:33271 2023-05-27 23:01:06,060 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9/cluster_c9b978b4-6cfa-52f0-e61d-c74f8da7f2b8/dfs/data/data3/current/BP-1631018543-172.31.14.131-1685228387104] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 23:01:06,061 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9/cluster_c9b978b4-6cfa-52f0-e61d-c74f8da7f2b8/dfs/data/data4/current/BP-1631018543-172.31.14.131-1685228387104] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 23:01:06,062 WARN [Listener at localhost/34663] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 23:01:06,065 INFO [Listener at localhost/34663] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 23:01:06,169 WARN [BP-1631018543-172.31.14.131-1685228387104 heartbeating to localhost/127.0.0.1:33271] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-27 23:01:06,170 WARN [BP-1631018543-172.31.14.131-1685228387104 heartbeating to 
localhost/127.0.0.1:33271] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1631018543-172.31.14.131-1685228387104 (Datanode Uuid 126cb2fd-6373-4225-94c5-7c97f19760dd) service to localhost/127.0.0.1:33271 2023-05-27 23:01:06,170 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9/cluster_c9b978b4-6cfa-52f0-e61d-c74f8da7f2b8/dfs/data/data1/current/BP-1631018543-172.31.14.131-1685228387104] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 23:01:06,171 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9/cluster_c9b978b4-6cfa-52f0-e61d-c74f8da7f2b8/dfs/data/data2/current/BP-1631018543-172.31.14.131-1685228387104] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 23:01:06,183 INFO [Listener at localhost/34663] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 23:01:06,299 INFO [Listener at localhost/34663] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-27 23:01:06,332 INFO [Listener at localhost/34663] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-27 23:01:06,343 INFO [Listener at localhost/34663] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRolling Thread=107 (was 96) - Thread LEAK? -, OpenFileDescriptor=537 (was 498) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=41 (was 56), ProcessCount=170 (was 172), AvailableMemoryMB=3284 (was 3523) 2023-05-27 23:01:06,352 INFO [Listener at localhost/34663] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=107, OpenFileDescriptor=537, MaxFileDescriptor=60000, SystemLoadAverage=41, ProcessCount=170, AvailableMemoryMB=3284 2023-05-27 23:01:06,352 INFO [Listener at localhost/34663] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-27 23:01:06,352 INFO [Listener at localhost/34663] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9/hadoop.log.dir so I do NOT create it in target/test-data/7c645e02-8657-f8bc-3a4f-ab5289ce4f27 2023-05-27 23:01:06,352 INFO [Listener at localhost/34663] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/ddf053d3-b4d2-2483-bc4c-16877d6341e9/hadoop.tmp.dir so I do NOT create it in target/test-data/7c645e02-8657-f8bc-3a4f-ab5289ce4f27 2023-05-27 23:01:06,352 INFO [Listener at localhost/34663] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7c645e02-8657-f8bc-3a4f-ab5289ce4f27/cluster_ff1b712e-105b-0112-f869-6dd4aaf159c9, deleteOnExit=true 2023-05-27 23:01:06,352 INFO [Listener at localhost/34663] 
hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-27 23:01:06,352 INFO [Listener at localhost/34663] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7c645e02-8657-f8bc-3a4f-ab5289ce4f27/test.cache.data in system properties and HBase conf 2023-05-27 23:01:06,353 INFO [Listener at localhost/34663] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7c645e02-8657-f8bc-3a4f-ab5289ce4f27/hadoop.tmp.dir in system properties and HBase conf 2023-05-27 23:01:06,353 INFO [Listener at localhost/34663] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7c645e02-8657-f8bc-3a4f-ab5289ce4f27/hadoop.log.dir in system properties and HBase conf 2023-05-27 23:01:06,353 INFO [Listener at localhost/34663] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7c645e02-8657-f8bc-3a4f-ab5289ce4f27/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-27 23:01:06,353 INFO [Listener at localhost/34663] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7c645e02-8657-f8bc-3a4f-ab5289ce4f27/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-27 23:01:06,353 INFO [Listener at localhost/34663] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-27 23:01:06,353 DEBUG [Listener at localhost/34663] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-05-27 23:01:06,353 INFO [Listener at localhost/34663] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7c645e02-8657-f8bc-3a4f-ab5289ce4f27/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-27 23:01:06,353 INFO [Listener at localhost/34663] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7c645e02-8657-f8bc-3a4f-ab5289ce4f27/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-27 23:01:06,353 INFO [Listener at localhost/34663] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7c645e02-8657-f8bc-3a4f-ab5289ce4f27/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-27 23:01:06,354 INFO [Listener at localhost/34663] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7c645e02-8657-f8bc-3a4f-ab5289ce4f27/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-27 23:01:06,354 INFO [Listener at localhost/34663] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7c645e02-8657-f8bc-3a4f-ab5289ce4f27/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-27 23:01:06,354 INFO [Listener at localhost/34663] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7c645e02-8657-f8bc-3a4f-ab5289ce4f27/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-27 23:01:06,354 INFO [Listener at localhost/34663] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7c645e02-8657-f8bc-3a4f-ab5289ce4f27/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-27 23:01:06,354 INFO [Listener at localhost/34663] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7c645e02-8657-f8bc-3a4f-ab5289ce4f27/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-27 23:01:06,354 INFO [Listener at localhost/34663] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7c645e02-8657-f8bc-3a4f-ab5289ce4f27/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-27 23:01:06,354 INFO [Listener at localhost/34663] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7c645e02-8657-f8bc-3a4f-ab5289ce4f27/nfs.dump.dir in system properties and HBase conf 2023-05-27 23:01:06,355 INFO [Listener at localhost/34663] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7c645e02-8657-f8bc-3a4f-ab5289ce4f27/java.io.tmpdir in system properties and HBase conf 2023-05-27 23:01:06,355 INFO [Listener at localhost/34663] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7c645e02-8657-f8bc-3a4f-ab5289ce4f27/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-27 23:01:06,355 INFO [Listener at localhost/34663] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7c645e02-8657-f8bc-3a4f-ab5289ce4f27/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-27 23:01:06,355 INFO [Listener at localhost/34663] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7c645e02-8657-f8bc-3a4f-ab5289ce4f27/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-27 23:01:06,357 WARN [Listener at localhost/34663] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-27 23:01:06,360 WARN [Listener at localhost/34663] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-27 23:01:06,360 WARN [Listener at localhost/34663] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-27 23:01:06,396 WARN [Listener at localhost/34663] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 23:01:06,398 INFO [Listener at localhost/34663] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 23:01:06,402 INFO [Listener at localhost/34663] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7c645e02-8657-f8bc-3a4f-ab5289ce4f27/java.io.tmpdir/Jetty_localhost_39649_hdfs____ygu56n/webapp 2023-05-27 23:01:06,492 INFO [Listener at localhost/34663] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39649 2023-05-27 23:01:06,494 WARN [Listener at localhost/34663] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
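The entries above show a fresh minicluster being brought up for testLogRollOnNothingWritten with the option set logged earlier (1 master, 1 region server, 2 datanodes, 1 ZK server): system properties are set, then the NameNode web app starts. A minimal sketch, with the TEST_UTIL field name assumed, of the startup call that drives this sequence:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class StartupSketch {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  static void startCluster() throws Exception {
    // Mirrors StartMiniClusterOption{numMasters=1, numRegionServers=1,
    // numDataNodes=2, numZkServers=1} as printed in the log.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(1)
        .numDataNodes(2)
        .numZkServers(1)
        .build();
    // Starts the mini DFS, the mini ZooKeeper cluster and then the HBase cluster.
    TEST_UTIL.startMiniCluster(option);
  }
}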
2023-05-27 23:01:06,496 WARN [Listener at localhost/34663] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-27 23:01:06,496 WARN [Listener at localhost/34663] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-27 23:01:06,539 WARN [Listener at localhost/42015] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 23:01:06,553 WARN [Listener at localhost/42015] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 23:01:06,555 WARN [Listener at localhost/42015] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 23:01:06,556 INFO [Listener at localhost/42015] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 23:01:06,560 INFO [Listener at localhost/42015] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7c645e02-8657-f8bc-3a4f-ab5289ce4f27/java.io.tmpdir/Jetty_localhost_36383_datanode____4hb15e/webapp 2023-05-27 23:01:06,650 INFO [Listener at localhost/42015] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36383 2023-05-27 23:01:06,656 WARN [Listener at localhost/37595] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 23:01:06,667 WARN [Listener at localhost/37595] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-27 23:01:06,669 WARN [Listener at localhost/37595] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-27 23:01:06,670 INFO [Listener at localhost/37595] log.Slf4jLog(67): jetty-6.1.26 2023-05-27 23:01:06,673 INFO [Listener at localhost/37595] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7c645e02-8657-f8bc-3a4f-ab5289ce4f27/java.io.tmpdir/Jetty_localhost_41803_datanode____.nhv4hn/webapp 2023-05-27 23:01:06,749 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x61bc317e9e635460: Processing first storage report for DS-401e9c90-f8ad-4cac-b1e8-17052aa9a01e from datanode 066e4359-1f32-40a1-b99d-40282790b567 2023-05-27 23:01:06,749 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x61bc317e9e635460: from storage DS-401e9c90-f8ad-4cac-b1e8-17052aa9a01e node DatanodeRegistration(127.0.0.1:34995, datanodeUuid=066e4359-1f32-40a1-b99d-40282790b567, infoPort=44479, infoSecurePort=0, ipcPort=37595, storageInfo=lv=-57;cid=testClusterID;nsid=1335733182;c=1685228466363), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 23:01:06,749 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x61bc317e9e635460: Processing first storage report for DS-6a2e224c-e5a1-45be-8287-82d69c4faaf9 from datanode 066e4359-1f32-40a1-b99d-40282790b567 2023-05-27 23:01:06,749 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* 
processReport 0x61bc317e9e635460: from storage DS-6a2e224c-e5a1-45be-8287-82d69c4faaf9 node DatanodeRegistration(127.0.0.1:34995, datanodeUuid=066e4359-1f32-40a1-b99d-40282790b567, infoPort=44479, infoSecurePort=0, ipcPort=37595, storageInfo=lv=-57;cid=testClusterID;nsid=1335733182;c=1685228466363), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 23:01:06,769 INFO [Listener at localhost/37595] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41803 2023-05-27 23:01:06,774 WARN [Listener at localhost/34157] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-27 23:01:06,868 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe2e809a02b995f7d: Processing first storage report for DS-e6dc970d-51a2-47ed-b5e7-1e4b2e058829 from datanode 7a66eb69-207f-4a73-a144-9f10f6743c62 2023-05-27 23:01:06,868 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe2e809a02b995f7d: from storage DS-e6dc970d-51a2-47ed-b5e7-1e4b2e058829 node DatanodeRegistration(127.0.0.1:45425, datanodeUuid=7a66eb69-207f-4a73-a144-9f10f6743c62, infoPort=45909, infoSecurePort=0, ipcPort=34157, storageInfo=lv=-57;cid=testClusterID;nsid=1335733182;c=1685228466363), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 23:01:06,868 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe2e809a02b995f7d: Processing first storage report for DS-a0cac2f7-5049-45b4-9366-807570dfef34 from datanode 7a66eb69-207f-4a73-a144-9f10f6743c62 2023-05-27 23:01:06,868 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe2e809a02b995f7d: from storage DS-a0cac2f7-5049-45b4-9366-807570dfef34 node DatanodeRegistration(127.0.0.1:45425, datanodeUuid=7a66eb69-207f-4a73-a144-9f10f6743c62, infoPort=45909, infoSecurePort=0, ipcPort=34157, storageInfo=lv=-57;cid=testClusterID;nsid=1335733182;c=1685228466363), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-27 23:01:06,882 DEBUG [Listener at localhost/34157] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7c645e02-8657-f8bc-3a4f-ab5289ce4f27 2023-05-27 23:01:06,883 INFO [Listener at localhost/34157] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7c645e02-8657-f8bc-3a4f-ab5289ce4f27/cluster_ff1b712e-105b-0112-f869-6dd4aaf159c9/zookeeper_0, clientPort=49517, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7c645e02-8657-f8bc-3a4f-ab5289ce4f27/cluster_ff1b712e-105b-0112-f869-6dd4aaf159c9/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7c645e02-8657-f8bc-3a4f-ab5289ce4f27/cluster_ff1b712e-105b-0112-f869-6dd4aaf159c9/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-27 23:01:06,884 INFO [Listener at localhost/34157] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=49517 2023-05-27 23:01:06,884 INFO [Listener at localhost/34157] 
fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 23:01:06,885 INFO [Listener at localhost/34157] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 23:01:06,897 INFO [Listener at localhost/34157] util.FSUtils(471): Created version file at hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4 with version=8 2023-05-27 23:01:06,897 INFO [Listener at localhost/34157] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:43791/user/jenkins/test-data/4b10a117-d4ba-f37b-d286-f0c952159510/hbase-staging 2023-05-27 23:01:06,899 INFO [Listener at localhost/34157] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-27 23:01:06,899 INFO [Listener at localhost/34157] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 23:01:06,899 INFO [Listener at localhost/34157] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-27 23:01:06,899 INFO [Listener at localhost/34157] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-27 23:01:06,899 INFO [Listener at localhost/34157] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 23:01:06,899 INFO [Listener at localhost/34157] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-27 23:01:06,899 INFO [Listener at localhost/34157] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-27 23:01:06,900 INFO [Listener at localhost/34157] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37591 2023-05-27 23:01:06,901 INFO [Listener at localhost/34157] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 23:01:06,902 INFO [Listener at localhost/34157] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 23:01:06,902 INFO [Listener at localhost/34157] zookeeper.RecoverableZooKeeper(93): Process identifier=master:37591 connecting to ZooKeeper ensemble=127.0.0.1:49517 2023-05-27 23:01:06,910 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): master:375910x0, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-27 23:01:06,911 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:37591-0x1006ee035c60000 connected 2023-05-27 23:01:06,924 DEBUG [Listener at localhost/34157] 
zookeeper.ZKUtil(164): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 23:01:06,924 DEBUG [Listener at localhost/34157] zookeeper.ZKUtil(164): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 23:01:06,924 DEBUG [Listener at localhost/34157] zookeeper.ZKUtil(164): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-27 23:01:06,925 DEBUG [Listener at localhost/34157] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37591 2023-05-27 23:01:06,925 DEBUG [Listener at localhost/34157] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37591 2023-05-27 23:01:06,925 DEBUG [Listener at localhost/34157] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37591 2023-05-27 23:01:06,925 DEBUG [Listener at localhost/34157] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37591 2023-05-27 23:01:06,925 DEBUG [Listener at localhost/34157] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37591 2023-05-27 23:01:06,925 INFO [Listener at localhost/34157] master.HMaster(444): hbase.rootdir=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4, hbase.cluster.distributed=false 2023-05-27 23:01:06,937 INFO [Listener at localhost/34157] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-27 23:01:06,938 INFO [Listener at localhost/34157] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 23:01:06,938 INFO [Listener at localhost/34157] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-27 23:01:06,938 INFO [Listener at localhost/34157] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-27 23:01:06,938 INFO [Listener at localhost/34157] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-27 23:01:06,938 INFO [Listener at localhost/34157] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-27 23:01:06,938 INFO [Listener at localhost/34157] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-27 23:01:06,939 INFO [Listener at localhost/34157] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44629 2023-05-27 23:01:06,940 INFO [Listener at localhost/34157] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-27 23:01:06,941 DEBUG [Listener at localhost/34157] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-27 
23:01:06,941 INFO [Listener at localhost/34157] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 23:01:06,942 INFO [Listener at localhost/34157] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 23:01:06,943 INFO [Listener at localhost/34157] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44629 connecting to ZooKeeper ensemble=127.0.0.1:49517 2023-05-27 23:01:06,945 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): regionserver:446290x0, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-27 23:01:06,946 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44629-0x1006ee035c60001 connected 2023-05-27 23:01:06,946 DEBUG [Listener at localhost/34157] zookeeper.ZKUtil(164): regionserver:44629-0x1006ee035c60001, quorum=127.0.0.1:49517, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 23:01:06,947 DEBUG [Listener at localhost/34157] zookeeper.ZKUtil(164): regionserver:44629-0x1006ee035c60001, quorum=127.0.0.1:49517, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 23:01:06,947 DEBUG [Listener at localhost/34157] zookeeper.ZKUtil(164): regionserver:44629-0x1006ee035c60001, quorum=127.0.0.1:49517, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-27 23:01:06,947 DEBUG [Listener at localhost/34157] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44629 2023-05-27 23:01:06,948 DEBUG [Listener at localhost/34157] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44629 2023-05-27 23:01:06,948 DEBUG [Listener at localhost/34157] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44629 2023-05-27 23:01:06,948 DEBUG [Listener at localhost/34157] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44629 2023-05-27 23:01:06,948 DEBUG [Listener at localhost/34157] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44629 2023-05-27 23:01:06,995 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,37591,1685228466898 2023-05-27 23:01:07,003 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-27 23:01:07,004 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,37591,1685228466898 2023-05-27 23:01:07,005 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): regionserver:44629-0x1006ee035c60001, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-27 23:01:07,005 DEBUG 
[Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-27 23:01:07,005 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 23:01:07,006 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-27 23:01:07,007 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,37591,1685228466898 from backup master directory 2023-05-27 23:01:07,007 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-27 23:01:07,009 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,37591,1685228466898 2023-05-27 23:01:07,009 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-27 23:01:07,009 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
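The master has just created and then deleted its backup-masters znode and, in the next entry, registers as the active master. A minimal sketch of one way a test could confirm which server won that ZooKeeper election; getClusterMetrics() is the standard Admin call, but the surrounding scaffolding and the TEST_UTIL field name are illustrative assumptions:

import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.client.Admin;

public class ActiveMasterSketch {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  static void printActiveMaster() throws Exception {
    Admin admin = TEST_UTIL.getAdmin();
    ClusterMetrics metrics = admin.getClusterMetrics();
    // Expected to report the ServerName registered above, e.g.
    // jenkins-hbase4.apache.org,37591,1685228466898.
    System.out.println("active master: " + metrics.getMasterName());
  }
}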
2023-05-27 23:01:07,009 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,37591,1685228466898 2023-05-27 23:01:07,021 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/hbase.id with ID: 1535c830-dabf-40db-b0d0-b4a173beffab 2023-05-27 23:01:07,029 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 23:01:07,031 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 23:01:07,037 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x6b48e5c5 to 127.0.0.1:49517 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 23:01:07,041 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2ca3b46d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 23:01:07,041 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-27 23:01:07,041 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-27 23:01:07,042 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 23:01:07,042 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/MasterData/data/master/store-tmp 2023-05-27 23:01:07,048 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 23:01:07,048 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-27 23:01:07,048 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 23:01:07,048 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 23:01:07,048 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-27 23:01:07,048 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 23:01:07,048 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 23:01:07,048 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 23:01:07,049 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/MasterData/WALs/jenkins-hbase4.apache.org,37591,1685228466898 2023-05-27 23:01:07,051 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37591%2C1685228466898, suffix=, logDir=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/MasterData/WALs/jenkins-hbase4.apache.org,37591,1685228466898, archiveDir=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/MasterData/oldWALs, maxLogs=10 2023-05-27 23:01:07,055 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/MasterData/WALs/jenkins-hbase4.apache.org,37591,1685228466898/jenkins-hbase4.apache.org%2C37591%2C1685228466898.1685228467051 2023-05-27 23:01:07,055 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34995,DS-401e9c90-f8ad-4cac-b1e8-17052aa9a01e,DISK], DatanodeInfoWithStorage[127.0.0.1:45425,DS-e6dc970d-51a2-47ed-b5e7-1e4b2e058829,DISK]] 2023-05-27 23:01:07,055 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-27 23:01:07,055 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 23:01:07,055 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-27 23:01:07,055 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-27 23:01:07,057 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 
1595e783b53d99cd5eef43b6debb2682 2023-05-27 23:01:07,058 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-27 23:01:07,059 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-27 23:01:07,059 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 23:01:07,060 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-27 23:01:07,060 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-27 23:01:07,062 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-27 23:01:07,063 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 23:01:07,064 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=729583, jitterRate=-0.07228849828243256}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 23:01:07,064 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 23:01:07,064 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-27 23:01:07,065 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-27 23:01:07,065 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 
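The CompactionConfiguration entry above echoes the values the master-store column family is opened with (minFilesToCompact:3, maxFilesToCompact:10, ratio 1.2). A minimal sketch, not taken from this test, of the standard configuration keys behind those values:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionConfSketch {
  static Configuration compactionConf() {
    Configuration conf = HBaseConfiguration.create();
    // Keys corresponding to the values echoed by CompactionConfiguration above.
    conf.setInt("hbase.hstore.compaction.min", 3);     // minFilesToCompact
    conf.setInt("hbase.hstore.compaction.max", 10);    // maxFilesToCompact
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);
    return conf;
  }
}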
2023-05-27 23:01:07,065 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-27 23:01:07,065 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-27 23:01:07,065 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-27 23:01:07,066 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-27 23:01:07,066 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-27 23:01:07,067 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-27 23:01:07,078 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-27 23:01:07,078 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-05-27 23:01:07,079 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-27 23:01:07,079 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-27 23:01:07,079 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-27 23:01:07,081 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 23:01:07,082 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-27 23:01:07,082 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-27 23:01:07,083 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-27 23:01:07,085 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-27 23:01:07,085 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): regionserver:44629-0x1006ee035c60001, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-27 23:01:07,086 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 23:01:07,086 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,37591,1685228466898, sessionid=0x1006ee035c60000, setting cluster-up flag (Was=false) 2023-05-27 23:01:07,090 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 23:01:07,094 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-27 23:01:07,095 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,37591,1685228466898 2023-05-27 23:01:07,097 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 
23:01:07,102 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-27 23:01:07,102 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,37591,1685228466898 2023-05-27 23:01:07,103 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/.hbase-snapshot/.tmp 2023-05-27 23:01:07,105 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-27 23:01:07,105 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 23:01:07,105 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 23:01:07,105 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 23:01:07,105 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-27 23:01:07,105 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-05-27 23:01:07,105 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 23:01:07,105 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-27 23:01:07,105 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 23:01:07,106 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685228497106 2023-05-27 23:01:07,107 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-27 23:01:07,107 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-27 23:01:07,107 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-27 23:01:07,107 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-27 23:01:07,107 
INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-27 23:01:07,107 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-27 23:01:07,107 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-27 23:01:07,108 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-27 23:01:07,108 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-27 23:01:07,108 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-27 23:01:07,108 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-27 23:01:07,108 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-27 23:01:07,108 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-27 23:01:07,108 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-27 23:01:07,108 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685228467108,5,FailOnTimeoutGroup] 2023-05-27 23:01:07,108 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685228467108,5,FailOnTimeoutGroup] 2023-05-27 23:01:07,108 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-27 23:01:07,108 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-27 23:01:07,108 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-27 23:01:07,108 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-05-27 23:01:07,109 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-27 23:01:07,119 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-27 23:01:07,119 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-27 23:01:07,119 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4 2023-05-27 23:01:07,125 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 23:01:07,126 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-27 23:01:07,127 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/meta/1588230740/info 2023-05-27 23:01:07,127 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-27 23:01:07,127 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 23:01:07,128 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-27 23:01:07,129 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/meta/1588230740/rep_barrier 2023-05-27 23:01:07,129 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-27 23:01:07,129 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 23:01:07,129 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-27 23:01:07,130 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/meta/1588230740/table 2023-05-27 23:01:07,131 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-27 23:01:07,131 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 23:01:07,132 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/meta/1588230740 2023-05-27 23:01:07,132 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/meta/1588230740 2023-05-27 23:01:07,134 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-27 23:01:07,135 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-27 23:01:07,137 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 23:01:07,137 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=712063, jitterRate=-0.09456564486026764}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-27 23:01:07,138 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-27 23:01:07,138 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-27 23:01:07,138 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-27 23:01:07,138 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-27 23:01:07,138 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-27 23:01:07,138 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-27 23:01:07,138 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-27 23:01:07,138 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-27 23:01:07,139 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-27 23:01:07,139 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-27 23:01:07,139 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-27 23:01:07,141 INFO 
[PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-27 23:01:07,142 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-27 23:01:07,150 INFO [RS:0;jenkins-hbase4:44629] regionserver.HRegionServer(951): ClusterId : 1535c830-dabf-40db-b0d0-b4a173beffab 2023-05-27 23:01:07,151 DEBUG [RS:0;jenkins-hbase4:44629] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-27 23:01:07,153 DEBUG [RS:0;jenkins-hbase4:44629] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-27 23:01:07,154 DEBUG [RS:0;jenkins-hbase4:44629] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-27 23:01:07,157 DEBUG [RS:0;jenkins-hbase4:44629] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-27 23:01:07,158 DEBUG [RS:0;jenkins-hbase4:44629] zookeeper.ReadOnlyZKClient(139): Connect 0x513c1b68 to 127.0.0.1:49517 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 23:01:07,162 DEBUG [RS:0;jenkins-hbase4:44629] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5ffbef3f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 23:01:07,162 DEBUG [RS:0;jenkins-hbase4:44629] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5bc47018, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-27 23:01:07,171 DEBUG [RS:0;jenkins-hbase4:44629] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:44629 2023-05-27 23:01:07,171 INFO [RS:0;jenkins-hbase4:44629] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-27 23:01:07,171 INFO [RS:0;jenkins-hbase4:44629] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-27 23:01:07,172 DEBUG [RS:0;jenkins-hbase4:44629] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-27 23:01:07,172 INFO [RS:0;jenkins-hbase4:44629] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,37591,1685228466898 with isa=jenkins-hbase4.apache.org/172.31.14.131:44629, startcode=1685228466937 2023-05-27 23:01:07,172 DEBUG [RS:0;jenkins-hbase4:44629] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-27 23:01:07,175 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47951, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-05-27 23:01:07,176 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37591] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44629,1685228466937 2023-05-27 23:01:07,176 DEBUG [RS:0;jenkins-hbase4:44629] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4 2023-05-27 23:01:07,176 DEBUG [RS:0;jenkins-hbase4:44629] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42015 2023-05-27 23:01:07,176 DEBUG [RS:0;jenkins-hbase4:44629] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-27 23:01:07,178 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 23:01:07,178 DEBUG [RS:0;jenkins-hbase4:44629] zookeeper.ZKUtil(162): regionserver:44629-0x1006ee035c60001, quorum=127.0.0.1:49517, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44629,1685228466937 2023-05-27 23:01:07,178 WARN [RS:0;jenkins-hbase4:44629] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-27 23:01:07,179 INFO [RS:0;jenkins-hbase4:44629] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 23:01:07,179 DEBUG [RS:0;jenkins-hbase4:44629] regionserver.HRegionServer(1946): logDir=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/WALs/jenkins-hbase4.apache.org,44629,1685228466937 2023-05-27 23:01:07,179 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44629,1685228466937] 2023-05-27 23:01:07,182 DEBUG [RS:0;jenkins-hbase4:44629] zookeeper.ZKUtil(162): regionserver:44629-0x1006ee035c60001, quorum=127.0.0.1:49517, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44629,1685228466937 2023-05-27 23:01:07,183 DEBUG [RS:0;jenkins-hbase4:44629] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-27 23:01:07,183 INFO [RS:0;jenkins-hbase4:44629] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-27 23:01:07,184 INFO [RS:0;jenkins-hbase4:44629] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-27 23:01:07,184 INFO [RS:0;jenkins-hbase4:44629] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-27 23:01:07,184 INFO [RS:0;jenkins-hbase4:44629] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-27 23:01:07,186 INFO [RS:0;jenkins-hbase4:44629] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-27 23:01:07,187 INFO [RS:0;jenkins-hbase4:44629] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-27 23:01:07,187 DEBUG [RS:0;jenkins-hbase4:44629] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 23:01:07,187 DEBUG [RS:0;jenkins-hbase4:44629] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 23:01:07,188 DEBUG [RS:0;jenkins-hbase4:44629] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 23:01:07,188 DEBUG [RS:0;jenkins-hbase4:44629] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 23:01:07,188 DEBUG [RS:0;jenkins-hbase4:44629] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 23:01:07,188 DEBUG [RS:0;jenkins-hbase4:44629] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-27 23:01:07,188 DEBUG [RS:0;jenkins-hbase4:44629] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 23:01:07,188 DEBUG [RS:0;jenkins-hbase4:44629] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 23:01:07,188 DEBUG [RS:0;jenkins-hbase4:44629] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 23:01:07,188 DEBUG [RS:0;jenkins-hbase4:44629] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-27 23:01:07,188 INFO [RS:0;jenkins-hbase4:44629] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-27 23:01:07,188 INFO [RS:0;jenkins-hbase4:44629] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-27 23:01:07,189 INFO [RS:0;jenkins-hbase4:44629] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-27 23:01:07,199 INFO [RS:0;jenkins-hbase4:44629] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-27 23:01:07,199 INFO [RS:0;jenkins-hbase4:44629] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44629,1685228466937-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-27 23:01:07,209 INFO [RS:0;jenkins-hbase4:44629] regionserver.Replication(203): jenkins-hbase4.apache.org,44629,1685228466937 started 2023-05-27 23:01:07,209 INFO [RS:0;jenkins-hbase4:44629] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44629,1685228466937, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44629, sessionid=0x1006ee035c60001 2023-05-27 23:01:07,209 DEBUG [RS:0;jenkins-hbase4:44629] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-27 23:01:07,209 DEBUG [RS:0;jenkins-hbase4:44629] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44629,1685228466937 2023-05-27 23:01:07,209 DEBUG [RS:0;jenkins-hbase4:44629] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44629,1685228466937' 2023-05-27 23:01:07,209 DEBUG [RS:0;jenkins-hbase4:44629] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-27 23:01:07,209 DEBUG [RS:0;jenkins-hbase4:44629] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-27 23:01:07,210 DEBUG [RS:0;jenkins-hbase4:44629] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-27 23:01:07,210 DEBUG [RS:0;jenkins-hbase4:44629] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-27 23:01:07,210 DEBUG [RS:0;jenkins-hbase4:44629] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44629,1685228466937 2023-05-27 23:01:07,210 DEBUG [RS:0;jenkins-hbase4:44629] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44629,1685228466937' 2023-05-27 23:01:07,210 DEBUG [RS:0;jenkins-hbase4:44629] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-27 23:01:07,210 DEBUG [RS:0;jenkins-hbase4:44629] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-27 23:01:07,210 DEBUG [RS:0;jenkins-hbase4:44629] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-27 23:01:07,210 INFO [RS:0;jenkins-hbase4:44629] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-27 23:01:07,210 INFO [RS:0;jenkins-hbase4:44629] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-27 23:01:07,292 DEBUG [jenkins-hbase4:37591] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-27 23:01:07,293 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44629,1685228466937, state=OPENING 2023-05-27 23:01:07,294 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-27 23:01:07,297 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 23:01:07,297 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-27 23:01:07,297 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44629,1685228466937}] 2023-05-27 23:01:07,312 INFO [RS:0;jenkins-hbase4:44629] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44629%2C1685228466937, suffix=, logDir=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/WALs/jenkins-hbase4.apache.org,44629,1685228466937, archiveDir=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/oldWALs, maxLogs=32 2023-05-27 23:01:07,319 INFO [RS:0;jenkins-hbase4:44629] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/WALs/jenkins-hbase4.apache.org,44629,1685228466937/jenkins-hbase4.apache.org%2C44629%2C1685228466937.1685228467312 2023-05-27 23:01:07,319 DEBUG [RS:0;jenkins-hbase4:44629] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45425,DS-e6dc970d-51a2-47ed-b5e7-1e4b2e058829,DISK], DatanodeInfoWithStorage[127.0.0.1:34995,DS-401e9c90-f8ad-4cac-b1e8-17052aa9a01e,DISK]] 2023-05-27 23:01:07,451 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44629,1685228466937 2023-05-27 23:01:07,452 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-27 23:01:07,454 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36370, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-27 23:01:07,458 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-27 23:01:07,458 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 23:01:07,459 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44629%2C1685228466937.meta, suffix=.meta, logDir=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/WALs/jenkins-hbase4.apache.org,44629,1685228466937, archiveDir=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/oldWALs, maxLogs=32 2023-05-27 23:01:07,466 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/WALs/jenkins-hbase4.apache.org,44629,1685228466937/jenkins-hbase4.apache.org%2C44629%2C1685228466937.meta.1685228467460.meta 2023-05-27 23:01:07,466 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34995,DS-401e9c90-f8ad-4cac-b1e8-17052aa9a01e,DISK], DatanodeInfoWithStorage[127.0.0.1:45425,DS-e6dc970d-51a2-47ed-b5e7-1e4b2e058829,DISK]] 2023-05-27 23:01:07,466 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-27 23:01:07,466 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-27 23:01:07,466 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-27 23:01:07,466 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-27 23:01:07,467 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-27 23:01:07,467 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 23:01:07,467 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-27 23:01:07,467 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-27 23:01:07,468 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-27 23:01:07,468 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/meta/1588230740/info 2023-05-27 23:01:07,469 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/meta/1588230740/info 2023-05-27 23:01:07,469 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-27 23:01:07,469 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 23:01:07,469 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-27 23:01:07,470 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/meta/1588230740/rep_barrier 2023-05-27 23:01:07,470 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/meta/1588230740/rep_barrier 2023-05-27 23:01:07,470 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-27 23:01:07,471 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 23:01:07,471 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-27 23:01:07,472 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/meta/1588230740/table 2023-05-27 23:01:07,472 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/meta/1588230740/table 2023-05-27 23:01:07,472 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-27 23:01:07,473 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 23:01:07,473 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/meta/1588230740 2023-05-27 23:01:07,474 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/meta/1588230740 2023-05-27 23:01:07,476 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-27 23:01:07,477 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-27 23:01:07,478 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=759178, jitterRate=-0.03465628623962402}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-27 23:01:07,478 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-27 23:01:07,481 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685228467451 2023-05-27 23:01:07,484 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-27 23:01:07,484 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-27 23:01:07,485 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44629,1685228466937, state=OPEN 2023-05-27 23:01:07,487 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-27 23:01:07,487 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-27 23:01:07,488 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-27 23:01:07,488 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44629,1685228466937 in 190 msec 2023-05-27 23:01:07,490 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-27 23:01:07,490 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 349 msec 2023-05-27 23:01:07,491 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 387 msec 2023-05-27 23:01:07,492 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685228467492, completionTime=-1 2023-05-27 23:01:07,492 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-27 23:01:07,492 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-27 23:01:07,494 DEBUG [hconnection-0x217b2799-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-27 23:01:07,496 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36382, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-27 23:01:07,497 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-27 23:01:07,497 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685228527497 2023-05-27 23:01:07,497 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685228587497 2023-05-27 23:01:07,497 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-05-27 23:01:07,502 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37591,1685228466898-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-27 23:01:07,503 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37591,1685228466898-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 23:01:07,503 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37591,1685228466898-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 23:01:07,503 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:37591, period=300000, unit=MILLISECONDS is enabled. 2023-05-27 23:01:07,503 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-27 23:01:07,503 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-05-27 23:01:07,503 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-27 23:01:07,504 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-27 23:01:07,504 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-27 23:01:07,505 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-27 23:01:07,506 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-27 23:01:07,507 DEBUG [HFileArchiver-11] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/.tmp/data/hbase/namespace/dc34a13f7e76cf63bf74886cb7769a2c 2023-05-27 23:01:07,508 DEBUG [HFileArchiver-11] backup.HFileArchiver(153): Directory hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/.tmp/data/hbase/namespace/dc34a13f7e76cf63bf74886cb7769a2c empty. 2023-05-27 23:01:07,508 DEBUG [HFileArchiver-11] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/.tmp/data/hbase/namespace/dc34a13f7e76cf63bf74886cb7769a2c 2023-05-27 23:01:07,508 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-27 23:01:07,517 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-27 23:01:07,518 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => dc34a13f7e76cf63bf74886cb7769a2c, NAME => 'hbase:namespace,,1685228467503.dc34a13f7e76cf63bf74886cb7769a2c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/.tmp 2023-05-27 23:01:07,524 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685228467503.dc34a13f7e76cf63bf74886cb7769a2c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 23:01:07,524 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing dc34a13f7e76cf63bf74886cb7769a2c, disabling compactions & flushes 2023-05-27 23:01:07,524 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685228467503.dc34a13f7e76cf63bf74886cb7769a2c. 
2023-05-27 23:01:07,524 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685228467503.dc34a13f7e76cf63bf74886cb7769a2c. 2023-05-27 23:01:07,524 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685228467503.dc34a13f7e76cf63bf74886cb7769a2c. after waiting 0 ms 2023-05-27 23:01:07,524 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685228467503.dc34a13f7e76cf63bf74886cb7769a2c. 2023-05-27 23:01:07,524 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685228467503.dc34a13f7e76cf63bf74886cb7769a2c. 2023-05-27 23:01:07,524 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for dc34a13f7e76cf63bf74886cb7769a2c: 2023-05-27 23:01:07,526 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-27 23:01:07,527 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685228467503.dc34a13f7e76cf63bf74886cb7769a2c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685228467527"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685228467527"}]},"ts":"1685228467527"} 2023-05-27 23:01:07,529 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-27 23:01:07,529 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-27 23:01:07,530 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685228467529"}]},"ts":"1685228467529"} 2023-05-27 23:01:07,531 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-27 23:01:07,538 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=dc34a13f7e76cf63bf74886cb7769a2c, ASSIGN}] 2023-05-27 23:01:07,539 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=dc34a13f7e76cf63bf74886cb7769a2c, ASSIGN 2023-05-27 23:01:07,540 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=dc34a13f7e76cf63bf74886cb7769a2c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44629,1685228466937; forceNewPlan=false, retain=false 2023-05-27 23:01:07,691 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=dc34a13f7e76cf63bf74886cb7769a2c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44629,1685228466937 2023-05-27 23:01:07,691 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685228467503.dc34a13f7e76cf63bf74886cb7769a2c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685228467691"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685228467691"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685228467691"}]},"ts":"1685228467691"} 2023-05-27 23:01:07,693 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure dc34a13f7e76cf63bf74886cb7769a2c, server=jenkins-hbase4.apache.org,44629,1685228466937}] 2023-05-27 23:01:07,848 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685228467503.dc34a13f7e76cf63bf74886cb7769a2c. 2023-05-27 23:01:07,848 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => dc34a13f7e76cf63bf74886cb7769a2c, NAME => 'hbase:namespace,,1685228467503.dc34a13f7e76cf63bf74886cb7769a2c.', STARTKEY => '', ENDKEY => ''} 2023-05-27 23:01:07,848 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace dc34a13f7e76cf63bf74886cb7769a2c 2023-05-27 23:01:07,848 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685228467503.dc34a13f7e76cf63bf74886cb7769a2c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-27 23:01:07,848 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for dc34a13f7e76cf63bf74886cb7769a2c 2023-05-27 23:01:07,848 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for dc34a13f7e76cf63bf74886cb7769a2c 2023-05-27 23:01:07,849 INFO [StoreOpener-dc34a13f7e76cf63bf74886cb7769a2c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region dc34a13f7e76cf63bf74886cb7769a2c 2023-05-27 23:01:07,851 DEBUG [StoreOpener-dc34a13f7e76cf63bf74886cb7769a2c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/namespace/dc34a13f7e76cf63bf74886cb7769a2c/info 2023-05-27 23:01:07,851 DEBUG [StoreOpener-dc34a13f7e76cf63bf74886cb7769a2c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/namespace/dc34a13f7e76cf63bf74886cb7769a2c/info 2023-05-27 23:01:07,851 INFO [StoreOpener-dc34a13f7e76cf63bf74886cb7769a2c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region dc34a13f7e76cf63bf74886cb7769a2c columnFamilyName info 2023-05-27 23:01:07,852 INFO [StoreOpener-dc34a13f7e76cf63bf74886cb7769a2c-1] regionserver.HStore(310): Store=dc34a13f7e76cf63bf74886cb7769a2c/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-27 23:01:07,852 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/namespace/dc34a13f7e76cf63bf74886cb7769a2c 2023-05-27 23:01:07,853 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/namespace/dc34a13f7e76cf63bf74886cb7769a2c 2023-05-27 23:01:07,855 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for dc34a13f7e76cf63bf74886cb7769a2c 2023-05-27 23:01:07,857 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/namespace/dc34a13f7e76cf63bf74886cb7769a2c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-27 23:01:07,857 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened dc34a13f7e76cf63bf74886cb7769a2c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=853828, jitterRate=0.08569873869419098}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-27 23:01:07,858 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for dc34a13f7e76cf63bf74886cb7769a2c: 2023-05-27 23:01:07,859 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685228467503.dc34a13f7e76cf63bf74886cb7769a2c., pid=6, masterSystemTime=1685228467845 2023-05-27 23:01:07,861 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685228467503.dc34a13f7e76cf63bf74886cb7769a2c. 2023-05-27 23:01:07,861 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685228467503.dc34a13f7e76cf63bf74886cb7769a2c. 
2023-05-27 23:01:07,862 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=dc34a13f7e76cf63bf74886cb7769a2c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44629,1685228466937 2023-05-27 23:01:07,862 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685228467503.dc34a13f7e76cf63bf74886cb7769a2c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685228467862"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685228467862"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685228467862"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685228467862"}]},"ts":"1685228467862"} 2023-05-27 23:01:07,865 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-27 23:01:07,865 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure dc34a13f7e76cf63bf74886cb7769a2c, server=jenkins-hbase4.apache.org,44629,1685228466937 in 170 msec 2023-05-27 23:01:07,867 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-27 23:01:07,868 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=dc34a13f7e76cf63bf74886cb7769a2c, ASSIGN in 328 msec 2023-05-27 23:01:07,869 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-27 23:01:07,870 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685228467869"}]},"ts":"1685228467869"} 2023-05-27 23:01:07,871 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-27 23:01:07,873 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-27 23:01:07,874 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 370 msec 2023-05-27 23:01:07,905 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-27 23:01:07,906 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-27 23:01:07,906 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 23:01:07,909 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-27 23:01:07,916 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): 
master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-27 23:01:07,919 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 10 msec 2023-05-27 23:01:07,920 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-27 23:01:07,927 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-27 23:01:07,931 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 10 msec 2023-05-27 23:01:07,944 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-27 23:01:07,947 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-27 23:01:07,947 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.938sec 2023-05-27 23:01:07,947 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-27 23:01:07,947 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-27 23:01:07,947 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-27 23:01:07,947 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37591,1685228466898-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-27 23:01:07,947 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37591,1685228466898-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-05-27 23:01:07,949 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-27 23:01:07,950 DEBUG [Listener at localhost/34157] zookeeper.ReadOnlyZKClient(139): Connect 0x4bc7f5e7 to 127.0.0.1:49517 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-27 23:01:07,954 DEBUG [Listener at localhost/34157] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@60799ee2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-27 23:01:07,955 DEBUG [hconnection-0x605bd26a-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-27 23:01:07,957 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36384, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-27 23:01:07,958 INFO [Listener at localhost/34157] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,37591,1685228466898 2023-05-27 23:01:07,958 INFO [Listener at localhost/34157] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-27 23:01:07,962 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-27 23:01:07,962 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 23:01:07,962 INFO [Listener at localhost/34157] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-27 23:01:07,962 INFO [Listener at localhost/34157] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-27 23:01:07,964 INFO [Listener at localhost/34157] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=test.com%2C8080%2C1, suffix=, logDir=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/WALs/test.com,8080,1, archiveDir=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/oldWALs, maxLogs=32 2023-05-27 23:01:07,969 INFO [Listener at localhost/34157] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/WALs/test.com,8080,1/test.com%2C8080%2C1.1685228467964 2023-05-27 23:01:07,969 DEBUG [Listener at localhost/34157] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34995,DS-401e9c90-f8ad-4cac-b1e8-17052aa9a01e,DISK], DatanodeInfoWithStorage[127.0.0.1:45425,DS-e6dc970d-51a2-47ed-b5e7-1e4b2e058829,DISK]] 2023-05-27 23:01:07,978 INFO [Listener at localhost/34157] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/WALs/test.com,8080,1/test.com%2C8080%2C1.1685228467964 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/WALs/test.com,8080,1/test.com%2C8080%2C1.1685228467969 
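
[Editorial sketch] The entries above show the test driving the WAL directly: WALFactory instantiates an FSHLogProvider, a first WAL file is created for the fake server name test.com,8080,1, and the writer is then rolled to a new file. A minimal sketch of the same sequence through the public WAL classes, assuming the branch-2 WALFactory(Configuration, String) constructor and WAL.rollWriter() behavior; the table name is hypothetical:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.RegionInfo;
    import org.apache.hadoop.hbase.client.RegionInfoBuilder;
    import org.apache.hadoop.hbase.wal.WAL;
    import org.apache.hadoop.hbase.wal.WALFactory;

    public class WalRollSketch {
      /** Creates a WAL through WALFactory and rolls it once, mirroring the roll logged above. */
      static void rollOnce(Configuration conf) throws Exception {
        WALFactory factory = new WALFactory(conf, "test.com,8080,1"); // factoryId ends up in the WAL dir name
        RegionInfo region = RegionInfoBuilder
            .newBuilder(TableName.valueOf("demo_table"))              // hypothetical table
            .build();
        WAL wal = factory.getWAL(region);   // lazily creates the first WAL file
        wal.rollWriter();                   // old file becomes eligible for archiving to oldWALs
        factory.close();                    // shuts down the provider and closes remaining writers
      }
    }
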
2023-05-27 23:01:07,978 DEBUG [Listener at localhost/34157] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34995,DS-401e9c90-f8ad-4cac-b1e8-17052aa9a01e,DISK], DatanodeInfoWithStorage[127.0.0.1:45425,DS-e6dc970d-51a2-47ed-b5e7-1e4b2e058829,DISK]] 2023-05-27 23:01:07,979 DEBUG [Listener at localhost/34157] wal.AbstractFSWAL(716): hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/WALs/test.com,8080,1/test.com%2C8080%2C1.1685228467964 is not closed yet, will try archiving it next time 2023-05-27 23:01:07,979 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/WALs/test.com,8080,1 2023-05-27 23:01:07,991 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/WALs/test.com,8080,1/test.com%2C8080%2C1.1685228467964 to hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/oldWALs/test.com%2C8080%2C1.1685228467964 2023-05-27 23:01:07,993 DEBUG [Listener at localhost/34157] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/oldWALs 2023-05-27 23:01:07,993 INFO [Listener at localhost/34157] wal.AbstractFSWAL(1031): Closed WAL: FSHLog test.com%2C8080%2C1:(num 1685228467969) 2023-05-27 23:01:07,993 INFO [Listener at localhost/34157] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-27 23:01:07,993 DEBUG [Listener at localhost/34157] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4bc7f5e7 to 127.0.0.1:49517 2023-05-27 23:01:07,994 DEBUG [Listener at localhost/34157] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 23:01:07,994 DEBUG [Listener at localhost/34157] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-27 23:01:07,994 DEBUG [Listener at localhost/34157] util.JVMClusterUtil(257): Found active master hash=1465288082, stopped=false 2023-05-27 23:01:07,994 INFO [Listener at localhost/34157] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,37591,1685228466898 2023-05-27 23:01:07,996 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): regionserver:44629-0x1006ee035c60001, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-27 23:01:07,996 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-27 23:01:07,996 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 23:01:07,996 INFO [Listener at localhost/34157] procedure2.ProcedureExecutor(629): Stopping 2023-05-27 23:01:07,999 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44629-0x1006ee035c60001, quorum=127.0.0.1:49517, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 23:01:08,000 DEBUG [Listener at localhost/34157] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6b48e5c5 to 127.0.0.1:49517 2023-05-27 23:01:08,000 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): 
master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-27 23:01:08,000 DEBUG [Listener at localhost/34157] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 23:01:08,000 INFO [Listener at localhost/34157] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,44629,1685228466937' ***** 2023-05-27 23:01:08,000 INFO [Listener at localhost/34157] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-27 23:01:08,000 INFO [RS:0;jenkins-hbase4:44629] regionserver.HeapMemoryManager(220): Stopping 2023-05-27 23:01:08,000 INFO [RS:0;jenkins-hbase4:44629] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-27 23:01:08,000 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-27 23:01:08,001 INFO [RS:0;jenkins-hbase4:44629] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-27 23:01:08,001 INFO [RS:0;jenkins-hbase4:44629] regionserver.HRegionServer(3303): Received CLOSE for dc34a13f7e76cf63bf74886cb7769a2c 2023-05-27 23:01:08,001 INFO [RS:0;jenkins-hbase4:44629] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44629,1685228466937 2023-05-27 23:01:08,001 DEBUG [RS:0;jenkins-hbase4:44629] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x513c1b68 to 127.0.0.1:49517 2023-05-27 23:01:08,001 DEBUG [RS:0;jenkins-hbase4:44629] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 23:01:08,002 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing dc34a13f7e76cf63bf74886cb7769a2c, disabling compactions & flushes 2023-05-27 23:01:08,002 INFO [RS:0;jenkins-hbase4:44629] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-27 23:01:08,002 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685228467503.dc34a13f7e76cf63bf74886cb7769a2c. 2023-05-27 23:01:08,002 INFO [RS:0;jenkins-hbase4:44629] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-27 23:01:08,002 INFO [RS:0;jenkins-hbase4:44629] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-27 23:01:08,002 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685228467503.dc34a13f7e76cf63bf74886cb7769a2c. 2023-05-27 23:01:08,002 INFO [RS:0;jenkins-hbase4:44629] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-27 23:01:08,002 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685228467503.dc34a13f7e76cf63bf74886cb7769a2c. after waiting 0 ms 2023-05-27 23:01:08,002 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685228467503.dc34a13f7e76cf63bf74886cb7769a2c. 
2023-05-27 23:01:08,002 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing dc34a13f7e76cf63bf74886cb7769a2c 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-27 23:01:08,002 INFO [RS:0;jenkins-hbase4:44629] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-05-27 23:01:08,002 DEBUG [RS:0;jenkins-hbase4:44629] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, dc34a13f7e76cf63bf74886cb7769a2c=hbase:namespace,,1685228467503.dc34a13f7e76cf63bf74886cb7769a2c.} 2023-05-27 23:01:08,002 DEBUG [RS:0;jenkins-hbase4:44629] regionserver.HRegionServer(1504): Waiting on 1588230740, dc34a13f7e76cf63bf74886cb7769a2c 2023-05-27 23:01:08,003 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-27 23:01:08,003 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-27 23:01:08,003 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-27 23:01:08,003 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-27 23:01:08,003 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-27 23:01:08,003 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=1.26 KB heapSize=2.89 KB 2023-05-27 23:01:08,019 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.17 KB at sequenceid=9 (bloomFilter=false), to=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/meta/1588230740/.tmp/info/3e0335ce267d4e63997293072036fcae 2023-05-27 23:01:08,021 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/namespace/dc34a13f7e76cf63bf74886cb7769a2c/.tmp/info/6114b7dd52704cfba8f95d78ba5ef514 2023-05-27 23:01:08,028 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/namespace/dc34a13f7e76cf63bf74886cb7769a2c/.tmp/info/6114b7dd52704cfba8f95d78ba5ef514 as hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/namespace/dc34a13f7e76cf63bf74886cb7769a2c/info/6114b7dd52704cfba8f95d78ba5ef514 2023-05-27 23:01:08,033 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/namespace/dc34a13f7e76cf63bf74886cb7769a2c/info/6114b7dd52704cfba8f95d78ba5ef514, entries=2, sequenceid=6, filesize=4.8 K 2023-05-27 23:01:08,033 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=94 B at sequenceid=9 (bloomFilter=false), 
to=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/meta/1588230740/.tmp/table/a9f7fdf8a3ce48cdae7ea1515b979c78 2023-05-27 23:01:08,034 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for dc34a13f7e76cf63bf74886cb7769a2c in 32ms, sequenceid=6, compaction requested=false 2023-05-27 23:01:08,034 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-27 23:01:08,039 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/namespace/dc34a13f7e76cf63bf74886cb7769a2c/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-27 23:01:08,040 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/meta/1588230740/.tmp/info/3e0335ce267d4e63997293072036fcae as hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/meta/1588230740/info/3e0335ce267d4e63997293072036fcae 2023-05-27 23:01:08,040 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685228467503.dc34a13f7e76cf63bf74886cb7769a2c. 2023-05-27 23:01:08,040 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for dc34a13f7e76cf63bf74886cb7769a2c: 2023-05-27 23:01:08,040 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685228467503.dc34a13f7e76cf63bf74886cb7769a2c. 
2023-05-27 23:01:08,044 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/meta/1588230740/info/3e0335ce267d4e63997293072036fcae, entries=10, sequenceid=9, filesize=5.9 K 2023-05-27 23:01:08,045 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/meta/1588230740/.tmp/table/a9f7fdf8a3ce48cdae7ea1515b979c78 as hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/meta/1588230740/table/a9f7fdf8a3ce48cdae7ea1515b979c78 2023-05-27 23:01:08,049 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/meta/1588230740/table/a9f7fdf8a3ce48cdae7ea1515b979c78, entries=2, sequenceid=9, filesize=4.7 K 2023-05-27 23:01:08,050 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.26 KB/1290, heapSize ~2.61 KB/2672, currentSize=0 B/0 for 1588230740 in 47ms, sequenceid=9, compaction requested=false 2023-05-27 23:01:08,050 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-27 23:01:08,056 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/data/hbase/meta/1588230740/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-05-27 23:01:08,056 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-27 23:01:08,057 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-27 23:01:08,057 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-27 23:01:08,057 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-27 23:01:08,195 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-05-27 23:01:08,196 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-05-27 23:01:08,202 INFO [RS:0;jenkins-hbase4:44629] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44629,1685228466937; all regions closed. 
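
[Editorial sketch] The flushes logged during region close (memstore data written to a .tmp HFile, committed into the store, then "Finished flush of dataSize ...") follow the same region flush path a client can trigger explicitly. A minimal sketch using the Admin flush call, with a hypothetical table name; the shutdown path itself flushes internally rather than going through Admin:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class FlushSketch {
      /** Asks the cluster to flush a table's memstores to HFiles, the same kind of flush logged above. */
      static void flushTable(Admin admin) throws IOException {
        admin.flush(TableName.valueOf("demo_table")); // hypothetical table name
      }
    }
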
2023-05-27 23:01:08,203 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/WALs/jenkins-hbase4.apache.org,44629,1685228466937 2023-05-27 23:01:08,207 DEBUG [RS:0;jenkins-hbase4:44629] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/oldWALs 2023-05-27 23:01:08,207 INFO [RS:0;jenkins-hbase4:44629] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C44629%2C1685228466937.meta:.meta(num 1685228467460) 2023-05-27 23:01:08,207 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/WALs/jenkins-hbase4.apache.org,44629,1685228466937 2023-05-27 23:01:08,212 DEBUG [RS:0;jenkins-hbase4:44629] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/oldWALs 2023-05-27 23:01:08,212 INFO [RS:0;jenkins-hbase4:44629] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C44629%2C1685228466937:(num 1685228467312) 2023-05-27 23:01:08,212 DEBUG [RS:0;jenkins-hbase4:44629] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 23:01:08,212 INFO [RS:0;jenkins-hbase4:44629] regionserver.LeaseManager(133): Closed leases 2023-05-27 23:01:08,212 INFO [RS:0;jenkins-hbase4:44629] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-05-27 23:01:08,212 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-27 23:01:08,213 INFO [RS:0;jenkins-hbase4:44629] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44629 2023-05-27 23:01:08,216 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): regionserver:44629-0x1006ee035c60001, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44629,1685228466937 2023-05-27 23:01:08,216 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 23:01:08,216 ERROR [Listener at localhost/34157-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@51385eff rejected from java.util.concurrent.ThreadPoolExecutor@5507c000[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 4] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:603) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-05-27 23:01:08,217 DEBUG [Listener at localhost/34157-EventThread] 
zookeeper.ZKWatcher(600): regionserver:44629-0x1006ee035c60001, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-27 23:01:08,217 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44629,1685228466937] 2023-05-27 23:01:08,218 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44629,1685228466937; numProcessing=1 2023-05-27 23:01:08,219 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44629,1685228466937 already deleted, retry=false 2023-05-27 23:01:08,219 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44629,1685228466937 expired; onlineServers=0 2023-05-27 23:01:08,219 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,37591,1685228466898' ***** 2023-05-27 23:01:08,219 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-27 23:01:08,219 DEBUG [M:0;jenkins-hbase4:37591] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@53690ba0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-27 23:01:08,219 INFO [M:0;jenkins-hbase4:37591] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37591,1685228466898 2023-05-27 23:01:08,219 INFO [M:0;jenkins-hbase4:37591] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37591,1685228466898; all regions closed. 2023-05-27 23:01:08,219 DEBUG [M:0;jenkins-hbase4:37591] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-27 23:01:08,219 DEBUG [M:0;jenkins-hbase4:37591] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-27 23:01:08,219 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-05-27 23:01:08,219 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685228467108] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685228467108,5,FailOnTimeoutGroup] 2023-05-27 23:01:08,219 DEBUG [M:0;jenkins-hbase4:37591] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-27 23:01:08,219 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685228467108] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685228467108,5,FailOnTimeoutGroup] 2023-05-27 23:01:08,220 INFO [M:0;jenkins-hbase4:37591] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-27 23:01:08,221 INFO [M:0;jenkins-hbase4:37591] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-05-27 23:01:08,221 INFO [M:0;jenkins-hbase4:37591] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-05-27 23:01:08,221 DEBUG [M:0;jenkins-hbase4:37591] master.HMaster(1512): Stopping service threads 2023-05-27 23:01:08,221 INFO [M:0;jenkins-hbase4:37591] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-27 23:01:08,221 ERROR [M:0;jenkins-hbase4:37591] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-11,5,PEWorkerGroup] 2023-05-27 23:01:08,221 INFO [M:0;jenkins-hbase4:37591] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-27 23:01:08,221 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-05-27 23:01:08,222 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-27 23:01:08,222 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-27 23:01:08,222 DEBUG [M:0;jenkins-hbase4:37591] zookeeper.ZKUtil(398): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-27 23:01:08,222 WARN [M:0;jenkins-hbase4:37591] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-27 23:01:08,222 INFO [M:0;jenkins-hbase4:37591] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-27 23:01:08,222 INFO [M:0;jenkins-hbase4:37591] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-27 23:01:08,222 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-27 23:01:08,223 DEBUG [M:0;jenkins-hbase4:37591] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-27 23:01:08,223 INFO [M:0;jenkins-hbase4:37591] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 23:01:08,223 DEBUG [M:0;jenkins-hbase4:37591] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 23:01:08,223 DEBUG [M:0;jenkins-hbase4:37591] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-27 23:01:08,223 DEBUG [M:0;jenkins-hbase4:37591] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-27 23:01:08,223 INFO [M:0;jenkins-hbase4:37591] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=24.07 KB heapSize=29.55 KB 2023-05-27 23:01:08,231 INFO [M:0;jenkins-hbase4:37591] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.07 KB at sequenceid=66 (bloomFilter=true), to=hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/70a665d33c424ef7b4f5a782976d0e04 2023-05-27 23:01:08,235 DEBUG [M:0;jenkins-hbase4:37591] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/70a665d33c424ef7b4f5a782976d0e04 as hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/70a665d33c424ef7b4f5a782976d0e04 2023-05-27 23:01:08,239 INFO [M:0;jenkins-hbase4:37591] regionserver.HStore(1080): Added hdfs://localhost:42015/user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/70a665d33c424ef7b4f5a782976d0e04, entries=8, sequenceid=66, filesize=6.3 K 2023-05-27 23:01:08,240 INFO [M:0;jenkins-hbase4:37591] regionserver.HRegion(2948): Finished flush of dataSize ~24.07 KB/24646, heapSize ~29.54 KB/30248, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 17ms, sequenceid=66, compaction requested=false 2023-05-27 23:01:08,241 INFO [M:0;jenkins-hbase4:37591] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-27 23:01:08,242 DEBUG [M:0;jenkins-hbase4:37591] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-27 23:01:08,242 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/829ac82a-8af2-7ab6-5793-2456df9a33d4/MasterData/WALs/jenkins-hbase4.apache.org,37591,1685228466898 2023-05-27 23:01:08,244 INFO [M:0;jenkins-hbase4:37591] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-27 23:01:08,245 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-27 23:01:08,245 INFO [M:0;jenkins-hbase4:37591] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37591 2023-05-27 23:01:08,248 DEBUG [M:0;jenkins-hbase4:37591] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,37591,1685228466898 already deleted, retry=false 2023-05-27 23:01:08,396 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 23:01:08,396 INFO [M:0;jenkins-hbase4:37591] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37591,1685228466898; zookeeper connection closed. 
2023-05-27 23:01:08,396 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): master:37591-0x1006ee035c60000, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 23:01:08,496 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): regionserver:44629-0x1006ee035c60001, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 23:01:08,496 INFO [RS:0;jenkins-hbase4:44629] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44629,1685228466937; zookeeper connection closed. 2023-05-27 23:01:08,496 DEBUG [Listener at localhost/34157-EventThread] zookeeper.ZKWatcher(600): regionserver:44629-0x1006ee035c60001, quorum=127.0.0.1:49517, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-27 23:01:08,497 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3bb8497d] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3bb8497d 2023-05-27 23:01:08,497 INFO [Listener at localhost/34157] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-27 23:01:08,497 WARN [Listener at localhost/34157] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 23:01:08,501 INFO [Listener at localhost/34157] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 23:01:08,605 WARN [BP-1129631025-172.31.14.131-1685228466363 heartbeating to localhost/127.0.0.1:42015] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-27 23:01:08,605 WARN [BP-1129631025-172.31.14.131-1685228466363 heartbeating to localhost/127.0.0.1:42015] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1129631025-172.31.14.131-1685228466363 (Datanode Uuid 7a66eb69-207f-4a73-a144-9f10f6743c62) service to localhost/127.0.0.1:42015 2023-05-27 23:01:08,606 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7c645e02-8657-f8bc-3a4f-ab5289ce4f27/cluster_ff1b712e-105b-0112-f869-6dd4aaf159c9/dfs/data/data3/current/BP-1129631025-172.31.14.131-1685228466363] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 23:01:08,606 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7c645e02-8657-f8bc-3a4f-ab5289ce4f27/cluster_ff1b712e-105b-0112-f869-6dd4aaf159c9/dfs/data/data4/current/BP-1129631025-172.31.14.131-1685228466363] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 23:01:08,607 WARN [Listener at localhost/34157] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-27 23:01:08,609 INFO [Listener at localhost/34157] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 23:01:08,712 WARN [BP-1129631025-172.31.14.131-1685228466363 heartbeating to localhost/127.0.0.1:42015] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-27 23:01:08,712 WARN [BP-1129631025-172.31.14.131-1685228466363 heartbeating to localhost/127.0.0.1:42015] datanode.BPServiceActor(857): Ending block pool service for: Block pool 
BP-1129631025-172.31.14.131-1685228466363 (Datanode Uuid 066e4359-1f32-40a1-b99d-40282790b567) service to localhost/127.0.0.1:42015 2023-05-27 23:01:08,712 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7c645e02-8657-f8bc-3a4f-ab5289ce4f27/cluster_ff1b712e-105b-0112-f869-6dd4aaf159c9/dfs/data/data1/current/BP-1129631025-172.31.14.131-1685228466363] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 23:01:08,713 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7c645e02-8657-f8bc-3a4f-ab5289ce4f27/cluster_ff1b712e-105b-0112-f869-6dd4aaf159c9/dfs/data/data2/current/BP-1129631025-172.31.14.131-1685228466363] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-27 23:01:08,722 INFO [Listener at localhost/34157] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-27 23:01:08,832 INFO [Listener at localhost/34157] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-27 23:01:08,843 INFO [Listener at localhost/34157] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-27 23:01:08,854 INFO [Listener at localhost/34157] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=132 (was 107) - Thread LEAK? -, OpenFileDescriptor=561 (was 537) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=46 (was 41) - SystemLoadAverage LEAK? -, ProcessCount=170 (was 170), AvailableMemoryMB=3204 (was 3284)
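
[Editorial sketch] Everything in this section, from the minicluster coming up through the WAL exercise, the orderly region and master shutdown, the DataNode and MiniZK teardown, and the closing ResourceChecker accounting, is the standard HBaseTestingUtility lifecycle that TestLogRolling-style tests wrap around their assertions. A minimal sketch of that pattern with a hypothetical test class; real tests typically tune configuration (WAL roll sizes, data node count) before starting the cluster:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.junit.AfterClass;
    import org.junit.BeforeClass;
    import org.junit.Test;

    public class MiniClusterLifecycleSketch {
      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      @BeforeClass
      public static void setUpCluster() throws Exception {
        // Starts MiniDFS, MiniZooKeeper, an HBase master and a region server,
        // which is what produces the "Starting up minicluster" ... "Minicluster is up" phases.
        TEST_UTIL.startMiniCluster();
      }

      @AfterClass
      public static void tearDownCluster() throws Exception {
        // Drives the shutdown sequence seen above: regions flush and close, WALs move to oldWALs,
        // then the DataNodes and the MiniZK cluster stop and "Minicluster is down" is logged.
        TEST_UTIL.shutdownMiniCluster();
      }

      @Test
      public void testAgainstTheCluster() throws Exception {
        // Hypothetical test body; real tests obtain tables, Admin or WALs from TEST_UTIL here.
      }
    }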